Feb 13 07:14:47.550185 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 07:14:47.550198 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 07:14:47.550205 kernel: BIOS-provided physical RAM map:
Feb 13 07:14:47.550209 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 07:14:47.550212 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 07:14:47.550216 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 07:14:47.550220 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 07:14:47.550224 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 07:14:47.550228 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819e2fff] usable
Feb 13 07:14:47.550232 kernel: BIOS-e820: [mem 0x00000000819e3000-0x00000000819e3fff] ACPI NVS
Feb 13 07:14:47.550237 kernel: BIOS-e820: [mem 0x00000000819e4000-0x00000000819e4fff] reserved
Feb 13 07:14:47.550240 kernel: BIOS-e820: [mem 0x00000000819e5000-0x000000008afccfff] usable
Feb 13 07:14:47.550244 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Feb 13 07:14:47.550248 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Feb 13 07:14:47.550253 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Feb 13 07:14:47.550258 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Feb 13 07:14:47.550262 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Feb 13 07:14:47.550266 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Feb 13 07:14:47.550270 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 07:14:47.550274 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 07:14:47.550278 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 07:14:47.550283 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 07:14:47.550287 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 07:14:47.550291 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Feb 13 07:14:47.550295 kernel: NX (Execute Disable) protection: active
Feb 13 07:14:47.550299 kernel: SMBIOS 3.2.1 present.
Feb 13 07:14:47.550304 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Feb 13 07:14:47.550308 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 07:14:47.550312 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 07:14:47.550316 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 07:14:47.550321 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 07:14:47.550326 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Feb 13 07:14:47.550330 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 07:14:47.550334 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Feb 13 07:14:47.550338 kernel: Using GB pages for direct mapping
Feb 13 07:14:47.550343 kernel: ACPI: Early table checksum verification disabled
Feb 13 07:14:47.550348 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 07:14:47.550352 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 07:14:47.550356 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Feb 13 07:14:47.550361 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 07:14:47.550367 kernel: ACPI: FACS 0x000000008C66CF80 000040
Feb 13 07:14:47.550371 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Feb 13 07:14:47.550377 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Feb 13 07:14:47.550381 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 07:14:47.550389 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 07:14:47.550395 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 07:14:47.550399 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 07:14:47.550404 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 07:14:47.550409 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 07:14:47.550413 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:14:47.550419 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 07:14:47.550424 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 07:14:47.550428 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:14:47.550433 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:14:47.550438 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 07:14:47.550456 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 07:14:47.550460 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:14:47.550465 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 07:14:47.550470 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 07:14:47.550474 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Feb 13 07:14:47.550479 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 07:14:47.550483 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 07:14:47.550488 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 07:14:47.550492 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Feb 13 07:14:47.550497 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 07:14:47.550501 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 07:14:47.550506 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 07:14:47.550511 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 07:14:47.550516 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 07:14:47.550520 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Feb 13 07:14:47.550525 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Feb 13 07:14:47.550529 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Feb 13 07:14:47.550534 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Feb 13 07:14:47.550538 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Feb 13 07:14:47.550543 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Feb 13 07:14:47.550547 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Feb 13 07:14:47.550553 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Feb 13 07:14:47.550557 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Feb 13 07:14:47.550562 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Feb 13 07:14:47.550566 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Feb 13 07:14:47.550571 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Feb 13 07:14:47.550575 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Feb 13 07:14:47.550580 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Feb 13 07:14:47.550584 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Feb 13 07:14:47.550589 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Feb 13 07:14:47.550594 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Feb 13 07:14:47.550598 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Feb 13 07:14:47.550603 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Feb 13 07:14:47.550607 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Feb 13 07:14:47.550612 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Feb 13 07:14:47.550616 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Feb 13 07:14:47.550621 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Feb 13 07:14:47.550625 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Feb 13 07:14:47.550630 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Feb 13 07:14:47.550635 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Feb 13 07:14:47.550640 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Feb 13 07:14:47.550644 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Feb 13 07:14:47.550648 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Feb 13 07:14:47.550653 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Feb 13 07:14:47.550658 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Feb 13 07:14:47.550662 kernel: No NUMA configuration found
Feb 13 07:14:47.550667 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Feb 13 07:14:47.550671 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Feb 13 07:14:47.550676 kernel: Zone ranges:
Feb 13 07:14:47.550681 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 07:14:47.550686 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 07:14:47.550690 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Feb 13 07:14:47.550695 kernel: Movable zone start for each node
Feb 13 07:14:47.550699 kernel: Early memory node ranges
Feb 13 07:14:47.550704 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 07:14:47.550708 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 07:14:47.550713 kernel: node 0: [mem 0x0000000040400000-0x00000000819e2fff]
Feb 13 07:14:47.550718 kernel: node 0: [mem 0x00000000819e5000-0x000000008afccfff]
Feb 13 07:14:47.550722 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Feb 13 07:14:47.550727 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Feb 13 07:14:47.550731 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Feb 13 07:14:47.550736 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Feb 13 07:14:47.550741 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 07:14:47.550749 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 07:14:47.550754 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 07:14:47.550759 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 07:14:47.550764 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Feb 13 07:14:47.550769 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Feb 13 07:14:47.550774 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Feb 13 07:14:47.550779 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Feb 13 07:14:47.550784 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 07:14:47.550789 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 07:14:47.550794 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 07:14:47.550799 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 07:14:47.550804 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 07:14:47.550809 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 07:14:47.550814 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 07:14:47.550819 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 07:14:47.550824 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 07:14:47.550828 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 07:14:47.550833 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 07:14:47.550838 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 07:14:47.550843 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 07:14:47.550848 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 07:14:47.550853 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 07:14:47.550858 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 07:14:47.550863 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 07:14:47.550867 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 07:14:47.550872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 07:14:47.550877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 07:14:47.550882 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 07:14:47.550887 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 07:14:47.550892 kernel: TSC deadline timer available
Feb 13 07:14:47.550897 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 07:14:47.550902 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Feb 13 07:14:47.550907 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 07:14:47.550912 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 07:14:47.550917 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 07:14:47.550922 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 07:14:47.550927 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 07:14:47.550931 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 07:14:47.550937 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Feb 13 07:14:47.550942 kernel: Policy zone: Normal
Feb 13 07:14:47.550947 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 07:14:47.550952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 07:14:47.550957 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 07:14:47.550962 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 07:14:47.550967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 07:14:47.550973 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 07:14:47.550978 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 07:14:47.550983 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 07:14:47.550987 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 07:14:47.550992 kernel: rcu: Hierarchical RCU implementation.
Feb 13 07:14:47.550998 kernel: rcu: RCU event tracing is enabled.
Feb 13 07:14:47.551003 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 07:14:47.551007 kernel: Rude variant of Tasks RCU enabled.
Feb 13 07:14:47.551012 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 07:14:47.551018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 07:14:47.551023 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 07:14:47.551028 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 07:14:47.551033 kernel: random: crng init done
Feb 13 07:14:47.551037 kernel: Console: colour dummy device 80x25
Feb 13 07:14:47.551042 kernel: printk: console [tty0] enabled
Feb 13 07:14:47.551047 kernel: printk: console [ttyS1] enabled
Feb 13 07:14:47.551052 kernel: ACPI: Core revision 20210730
Feb 13 07:14:47.551057 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Feb 13 07:14:47.551062 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 07:14:47.551067 kernel: DMAR: Host address width 39
Feb 13 07:14:47.551072 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 07:14:47.551077 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 07:14:47.551082 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Feb 13 07:14:47.551087 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Feb 13 07:14:47.551092 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 07:14:47.551097 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 07:14:47.551101 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 07:14:47.551107 kernel: x2apic enabled
Feb 13 07:14:47.551112 kernel: Switched APIC routing to cluster x2apic.
Feb 13 07:14:47.551117 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 07:14:47.551122 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 07:14:47.551127 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 07:14:47.551132 kernel: process: using mwait in idle threads
Feb 13 07:14:47.551137 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 07:14:47.551142 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 07:14:47.551146 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 07:14:47.551151 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 07:14:47.551157 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 07:14:47.551162 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 07:14:47.551167 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 07:14:47.551171 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 07:14:47.551176 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 07:14:47.551181 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 07:14:47.551185 kernel: TAA: Mitigation: TSX disabled
Feb 13 07:14:47.551190 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 07:14:47.551195 kernel: SRBDS: Mitigation: Microcode
Feb 13 07:14:47.551200 kernel: GDS: Vulnerable: No microcode
Feb 13 07:14:47.551205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 07:14:47.551210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 07:14:47.551215 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 07:14:47.551220 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 07:14:47.551224 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 07:14:47.551229 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 07:14:47.551234 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 07:14:47.551239 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 07:14:47.551243 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 07:14:47.551248 kernel: Freeing SMP alternatives memory: 32K
Feb 13 07:14:47.551253 kernel: pid_max: default: 32768 minimum: 301
Feb 13 07:14:47.551258 kernel: LSM: Security Framework initializing
Feb 13 07:14:47.551263 kernel: SELinux: Initializing.
Feb 13 07:14:47.551268 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 07:14:47.551273 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 07:14:47.551278 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 07:14:47.551283 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 07:14:47.551287 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 07:14:47.551292 kernel: ... version: 4
Feb 13 07:14:47.551297 kernel: ... bit width: 48
Feb 13 07:14:47.551302 kernel: ... generic registers: 4
Feb 13 07:14:47.551307 kernel: ... value mask: 0000ffffffffffff
Feb 13 07:14:47.551312 kernel: ... max period: 00007fffffffffff
Feb 13 07:14:47.551317 kernel: ... fixed-purpose events: 3
Feb 13 07:14:47.551322 kernel: ... event mask: 000000070000000f
Feb 13 07:14:47.551327 kernel: signal: max sigframe size: 2032
Feb 13 07:14:47.551332 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 07:14:47.551336 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 07:14:47.551341 kernel: smp: Bringing up secondary CPUs ...
Feb 13 07:14:47.551346 kernel: x86: Booting SMP configuration:
Feb 13 07:14:47.551351 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 13 07:14:47.551356 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible.
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 07:14:47.551362 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 13 07:14:47.551366 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 07:14:47.551371 kernel: smpboot: Max logical packages: 1
Feb 13 07:14:47.551376 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 07:14:47.551381 kernel: devtmpfs: initialized
Feb 13 07:14:47.551387 kernel: x86/mm: Memory block size: 128MB
Feb 13 07:14:47.551392 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819e3000-0x819e3fff] (4096 bytes)
Feb 13 07:14:47.551414 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Feb 13 07:14:47.551419 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 07:14:47.551424 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 07:14:47.551429 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 07:14:47.551434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 07:14:47.551439 kernel: audit: initializing netlink subsys (disabled)
Feb 13 07:14:47.551444 kernel: audit: type=2000 audit(1707808482.040:1): state=initialized audit_enabled=0 res=1
Feb 13 07:14:47.551449 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 07:14:47.551454 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 07:14:47.551459 kernel: cpuidle: using governor menu
Feb 13 07:14:47.551465 kernel: ACPI: bus type PCI registered
Feb 13 07:14:47.551470 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 07:14:47.551475 kernel: dca service started, version 1.12.1
Feb 13 07:14:47.551480 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 07:14:47.551484 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 07:14:47.551489 kernel: PCI: Using configuration type 1 for base access
Feb 13 07:14:47.551494 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 07:14:47.551499 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 07:14:47.551504 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 07:14:47.551510 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 07:14:47.551515 kernel: ACPI: Added _OSI(Module Device)
Feb 13 07:14:47.551520 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 07:14:47.551525 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 07:14:47.551530 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 07:14:47.551535 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 07:14:47.551540 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 07:14:47.551545 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 07:14:47.551550 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 07:14:47.551555 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:14:47.551560 kernel: ACPI: SSDT 0xFFFF938C80212C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 07:14:47.551565 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 07:14:47.551570 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:14:47.551575 kernel: ACPI: SSDT 0xFFFF938C81AE1C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 07:14:47.551580 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:14:47.551585 kernel: ACPI: SSDT 0xFFFF938C81A5B800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 07:14:47.551590 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:14:47.551595 kernel: ACPI: SSDT 0xFFFF938C81A58000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 07:14:47.551600 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:14:47.551605 kernel: ACPI: SSDT 0xFFFF938C80149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 07:14:47.551610 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 07:14:47.551615 kernel: ACPI: SSDT 0xFFFF938C81AE2800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 07:14:47.551620 kernel: ACPI: Interpreter enabled
Feb 13 07:14:47.551625 kernel: ACPI: PM: (supports S0 S5)
Feb 13 07:14:47.551630 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 07:14:47.551635 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 07:14:47.551640 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 07:14:47.551644 kernel: HEST: Table parsing has been initialized.
Feb 13 07:14:47.551650 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 07:14:47.551655 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 07:14:47.551660 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 07:14:47.551665 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 07:14:47.551670 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 07:14:47.551675 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 07:14:47.551680 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 07:14:47.551684 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 07:14:47.551689 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 07:14:47.551695 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 07:14:47.551700 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 07:14:47.551705 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 07:14:47.551710 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 07:14:47.551715 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 07:14:47.551719 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 07:14:47.551784 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 07:14:47.551829 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 07:14:47.551873 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 07:14:47.551880 kernel: PCI host bridge to bus 0000:00
Feb 13 07:14:47.551926 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 07:14:47.551963 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 07:14:47.552000 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 07:14:47.552036 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Feb 13 07:14:47.552072 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 07:14:47.552110 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 07:14:47.552159 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 07:14:47.552208 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 07:14:47.552251 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.552297 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 07:14:47.552341 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Feb 13 07:14:47.552391 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 07:14:47.552435 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Feb 13 07:14:47.552482 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 07:14:47.552525 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Feb 13 07:14:47.552568 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 07:14:47.552615 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 07:14:47.552659 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Feb 13 07:14:47.552700 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Feb 13 07:14:47.552746 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 07:14:47.552788 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 07:14:47.552832 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 07:14:47.552874 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 07:14:47.552920 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 07:14:47.552961 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Feb 13 07:14:47.553001 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 07:14:47.553046 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 07:14:47.553087 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Feb 13 07:14:47.553128 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 07:14:47.553174 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 07:14:47.553217 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Feb 13 07:14:47.553258 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 07:14:47.553302 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 07:14:47.553344 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Feb 13 07:14:47.553384 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Feb 13 07:14:47.553428 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Feb 13 07:14:47.553469 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Feb 13 07:14:47.553517 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Feb 13 07:14:47.553560 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Feb 13 07:14:47.553601 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 07:14:47.553646 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 07:14:47.553688 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.553733 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 07:14:47.553775 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.553822 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 07:14:47.553863 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.553909 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 07:14:47.553951 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.553998 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Feb 13 07:14:47.554043 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.554088 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 07:14:47.554131 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 07:14:47.554176 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 07:14:47.554224 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 07:14:47.554266 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Feb 13 07:14:47.554307 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 07:14:47.554353 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 07:14:47.554396 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 07:14:47.554445 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 07:14:47.554491 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 07:14:47.554534 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Feb 13 07:14:47.554577 kernel: pci 0000:01:00.0: PME# supported from D3cold
Feb 13 07:14:47.554619 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 07:14:47.554662 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 07:14:47.554712 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 07:14:47.554755 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 07:14:47.554800 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Feb 13 07:14:47.554843 kernel: pci 0000:01:00.1: PME# supported from D3cold
Feb 13 07:14:47.554887 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 07:14:47.554930 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 07:14:47.554972 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 07:14:47.555017 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 07:14:47.555059 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 07:14:47.555103 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 07:14:47.555151 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Feb 13 07:14:47.555195 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Feb 13 07:14:47.555238 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 07:14:47.555281 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Feb 13 07:14:47.555323 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.555366 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 07:14:47.555410 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 07:14:47.555454 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 07:14:47.555501 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 07:14:47.555545 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Feb 13 07:14:47.555589 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 07:14:47.555650 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Feb 13 07:14:47.555693 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 07:14:47.555734 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 07:14:47.555776 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 07:14:47.555818 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 13 07:14:47.555860 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 13 07:14:47.555906 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Feb 13 07:14:47.555949 kernel: pci 0000:06:00.0: enabling Extended Tags
Feb 13 07:14:47.555992 kernel: pci 0000:06:00.0: supports D1 D2
Feb 13 07:14:47.556034 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 07:14:47.556077 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 13 07:14:47.556151 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 13 07:14:47.556213 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 13 07:14:47.556259 kernel: pci_bus 0000:07: extended config space not accessible
Feb 13 07:14:47.556308 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Feb 13 07:14:47.556353 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Feb 13 07:14:47.556421 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Feb 13 07:14:47.556485 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 13 07:14:47.556530 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 07:14:47.556576 kernel: pci 0000:07:00.0: supports D1 D2
Feb 13 07:14:47.556621 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 07:14:47.556665 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 13 07:14:47.556707 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 07:14:47.556749 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 13 07:14:47.556757 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 13 07:14:47.556762 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 13 07:14:47.556769 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 13 07:14:47.556774 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 13 07:14:47.556779 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 13 07:14:47.556784 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 13 07:14:47.556789 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 13 07:14:47.556795 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 13 07:14:47.556800 kernel: iommu: Default domain type: Translated
Feb 13 07:14:47.556805 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 07:14:47.556849 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Feb 13 07:14:47.556895 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 07:14:47.556940 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Feb 13 07:14:47.556947 kernel: vgaarb: loaded
Feb 13 07:14:47.556953 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 07:14:47.556958 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 07:14:47.556963 kernel: PTP clock support registered
Feb 13 07:14:47.556969 kernel: PCI: Using ACPI for IRQ routing
Feb 13 07:14:47.556974 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 07:14:47.556979 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 13 07:14:47.556985 kernel: e820: reserve RAM buffer [mem 0x819e3000-0x83ffffff]
Feb 13 07:14:47.556990 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Feb 13 07:14:47.556995 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Feb 13 07:14:47.557000 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Feb 13 07:14:47.557005 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Feb 13 07:14:47.557010 kernel: clocksource: Switched to clocksource tsc-early
Feb 13 07:14:47.557016 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 07:14:47.557021 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 07:14:47.557026 kernel: pnp: PnP ACPI init
Feb 13 07:14:47.557072 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 13 07:14:47.557114 kernel: pnp 00:02: [dma 0 disabled]
Feb 13 07:14:47.557155 kernel: pnp 00:03: [dma 0 disabled]
Feb 13 07:14:47.557196 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 13 07:14:47.557234 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 13 07:14:47.557274 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 13 07:14:47.557316 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 13 07:14:47.557354 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 13 07:14:47.557415 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 13 07:14:47.557472 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 13 07:14:47.557508 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 13 07:14:47.557545 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 13 07:14:47.557582 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 13 07:14:47.557621 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 13 07:14:47.557663 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 13 07:14:47.557701 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 13 07:14:47.557737 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 13 07:14:47.557774 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 13 07:14:47.557811 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 13 07:14:47.557848 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 13 07:14:47.557887 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 13 07:14:47.557927 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 13 07:14:47.557935 kernel: pnp: PnP ACPI: found 10 devices
Feb 13 07:14:47.557940 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 07:14:47.557945 kernel: NET: Registered PF_INET protocol family
Feb 13 07:14:47.557951 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 07:14:47.557956 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 13 07:14:47.557962 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 07:14:47.557968 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 07:14:47.557973 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 13 07:14:47.557978 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 13 07:14:47.557984 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 07:14:47.557989 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 07:14:47.557994 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 07:14:47.557999 kernel: NET: Registered PF_XDP protocol family
Feb 13 07:14:47.558041 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Feb 13 07:14:47.558084 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Feb 13 07:14:47.558126 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Feb 13 07:14:47.558169 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 07:14:47.558212 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 07:14:47.558256 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 07:14:47.558299 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 07:14:47.558343 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 07:14:47.558384 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Feb 13 07:14:47.558473 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 07:14:47.558513 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Feb 13 07:14:47.558555 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Feb 13 07:14:47.558595 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 07:14:47.558637 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Feb 13 07:14:47.558680 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Feb 13 07:14:47.558722 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 07:14:47.558763 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Feb 13 07:14:47.558804 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Feb 13 07:14:47.558847 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Feb 13 07:14:47.558889 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 07:14:47.558932 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Feb 13 07:14:47.558973 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Feb 13 07:14:47.559016 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Feb 13 07:14:47.559057 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Feb 13 07:14:47.559097 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 13 07:14:47.559134 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 07:14:47.559170 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 07:14:47.559206 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 07:14:47.559242 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Feb 13 07:14:47.559277 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 13 07:14:47.559319 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Feb 13 07:14:47.559360 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 07:14:47.559426 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Feb 13 07:14:47.559466 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Feb 13 07:14:47.559510 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 07:14:47.559549 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Feb 13 07:14:47.559593 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Feb 13 07:14:47.559634 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 07:14:47.559674 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 13 07:14:47.559715 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Feb 13 07:14:47.559723 kernel: PCI: CLS 64 bytes, default 64
Feb 13 07:14:47.559728 kernel: DMAR: No ATSR found
Feb 13 07:14:47.559734 kernel: DMAR: No SATC found
Feb 13 07:14:47.559739 kernel: DMAR: dmar0: Using Queued invalidation
Feb 13 07:14:47.559783 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 13 07:14:47.559827 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 13 07:14:47.559870 kernel: pci 0000:00:08.0: Adding to iommu group 2
Feb 13 07:14:47.559912 kernel: pci 0000:00:12.0: Adding to iommu group 3
Feb 13 07:14:47.559953 kernel: pci 0000:00:14.0: Adding to iommu group 4
Feb 13 07:14:47.559994 kernel: pci 0000:00:14.2: Adding to iommu group 4
Feb 13 07:14:47.560037 kernel: pci 0000:00:15.0: Adding to iommu group 5
Feb 13 07:14:47.560078 kernel: pci 0000:00:15.1: Adding to iommu group 5
Feb 13 07:14:47.560122 kernel: pci 0000:00:16.0: Adding to iommu group 6
Feb 13 07:14:47.560163 kernel: pci 0000:00:16.1: Adding to iommu group 6
Feb 13 07:14:47.560206 kernel: pci 0000:00:16.4: Adding to iommu group 6
Feb 13 07:14:47.560247 kernel: pci 0000:00:17.0: Adding to iommu group 7
Feb 13 07:14:47.560289 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Feb 13 07:14:47.560331 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Feb 13 07:14:47.560373 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Feb 13 07:14:47.560420 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Feb 13 07:14:47.560462 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Feb 13 07:14:47.560506 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Feb 13 07:14:47.560548 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Feb 13 07:14:47.560591 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Feb 13 07:14:47.560650 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Feb 13 07:14:47.560692 kernel: pci 0000:01:00.0: Adding to iommu group 1
Feb 13 07:14:47.560735 kernel: pci 0000:01:00.1: Adding to iommu group 1
Feb 13 07:14:47.560777 kernel: pci 0000:03:00.0: Adding to iommu group 15
Feb 13 07:14:47.560821 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 13 07:14:47.560866 kernel: pci 0000:06:00.0: Adding to iommu group 17
Feb 13 07:14:47.560913 kernel: pci 0000:07:00.0: Adding to iommu group 17
Feb 13 07:14:47.560920 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 13 07:14:47.560926 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 07:14:47.560931 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Feb 13 07:14:47.560936 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Feb 13 07:14:47.560941 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 13 07:14:47.560947 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 13 07:14:47.560953 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 13 07:14:47.560997 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 13 07:14:47.561005 kernel: Initialise system trusted keyrings
Feb 13 07:14:47.561011 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 13 07:14:47.561016 kernel: Key type asymmetric registered
Feb 13 07:14:47.561021 kernel: Asymmetric key parser 'x509' registered
Feb 13 07:14:47.561026 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 13 07:14:47.561031 kernel: io scheduler mq-deadline registered
Feb 13 07:14:47.561038 kernel: io scheduler kyber registered
Feb 13 07:14:47.561043 kernel: io scheduler bfq registered
Feb 13 07:14:47.561085 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Feb 13 07:14:47.561127 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Feb 13 07:14:47.561169 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Feb 13 07:14:47.561210 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Feb 13 07:14:47.561251 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Feb 13 07:14:47.561293 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Feb 13 07:14:47.561340 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 13 07:14:47.561348 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 13 07:14:47.561354 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 13 07:14:47.561359 kernel: pstore: Registered erst as persistent store backend
Feb 13 07:14:47.561365 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 07:14:47.561370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 07:14:47.561375 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 07:14:47.561380 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 07:14:47.561389 kernel: hpet_acpi_add: no address or irqs in _CRS
Feb 13 07:14:47.561470 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 13 07:14:47.561478 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 07:14:47.561515 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 13 07:14:47.561553 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 13 07:14:47.561591 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T07:14:46 UTC (1707808486)
Feb 13 07:14:47.561629 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 13 07:14:47.561636 kernel: fail to initialize ptp_kvm
Feb 13 07:14:47.561642 kernel: intel_pstate: Intel P-state driver initializing
Feb 13 07:14:47.561648 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 13 07:14:47.561653 kernel: intel_pstate: HWP enabled
Feb 13 07:14:47.561658 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 13 07:14:47.561663 kernel: vesafb: scrolling: redraw
Feb 13 07:14:47.561669 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 13 07:14:47.561674 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000729ed3bd, using 768k, total 768k
Feb 13 07:14:47.561679 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 07:14:47.561684 kernel: fb0: VESA VGA frame buffer device
Feb 13 07:14:47.561690 kernel: NET: Registered PF_INET6 protocol family
Feb 13 07:14:47.561695 kernel: Segment Routing with IPv6
Feb 13 07:14:47.561701 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 07:14:47.561706 kernel: NET: Registered PF_PACKET protocol family
Feb 13 07:14:47.561711 kernel: Key type dns_resolver registered
Feb 13 07:14:47.561716 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Feb 13 07:14:47.561721 kernel: microcode: Microcode Update Driver: v2.2.
Feb 13 07:14:47.561726 kernel: IPI shorthand broadcast: enabled
Feb 13 07:14:47.561732 kernel: sched_clock: Marking stable (1732748756, 1339448669)->(4492321384, -1420123959)
Feb 13 07:14:47.561738 kernel: registered taskstats version 1
Feb 13 07:14:47.561743 kernel: Loading compiled-in X.509 certificates
Feb 13 07:14:47.561748 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 13 07:14:47.561753 kernel: Key type .fscrypt registered
Feb 13 07:14:47.561758 kernel: Key type fscrypt-provisioning registered
Feb 13 07:14:47.561763 kernel: pstore: Using crash dump compression: deflate
Feb 13 07:14:47.561768 kernel: ima: Allocated hash algorithm: sha1
Feb 13 07:14:47.561774 kernel: ima: No architecture policies found
Feb 13 07:14:47.561780 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 13 07:14:47.561785 kernel: Write protecting the kernel read-only data: 28672k
Feb 13 07:14:47.561790 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 13 07:14:47.561795 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 13 07:14:47.561800 kernel: Run /init as init process
Feb 13 07:14:47.561806 kernel: with arguments:
Feb 13 07:14:47.561811 kernel: /init
Feb 13 07:14:47.561816 kernel: with environment:
Feb 13 07:14:47.561821 kernel: HOME=/
Feb 13 07:14:47.561827 kernel: TERM=linux
Feb 13 07:14:47.561832 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 07:14:47.561838 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 13 07:14:47.561845 systemd[1]: Detected architecture x86-64.
Feb 13 07:14:47.561850 systemd[1]: Running in initrd.
Feb 13 07:14:47.561855 systemd[1]: No hostname configured, using default hostname.
Feb 13 07:14:47.561861 systemd[1]: Hostname set to <localhost>.
Feb 13 07:14:47.561866 systemd[1]: Initializing machine ID from random generator.
Feb 13 07:14:47.561872 systemd[1]: Queued start job for default target initrd.target.
Feb 13 07:14:47.561878 systemd[1]: Started systemd-ask-password-console.path.
Feb 13 07:14:47.561883 systemd[1]: Reached target cryptsetup.target.
Feb 13 07:14:47.561888 systemd[1]: Reached target paths.target.
Feb 13 07:14:47.561893 systemd[1]: Reached target slices.target.
Feb 13 07:14:47.561899 systemd[1]: Reached target swap.target.
Feb 13 07:14:47.561904 systemd[1]: Reached target timers.target.
Feb 13 07:14:47.561909 systemd[1]: Listening on iscsid.socket.
Feb 13 07:14:47.561916 systemd[1]: Listening on iscsiuio.socket.
Feb 13 07:14:47.561921 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 13 07:14:47.561926 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 13 07:14:47.561932 systemd[1]: Listening on systemd-journald.socket.
Feb 13 07:14:47.561937 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz
Feb 13 07:14:47.561943 systemd[1]: Listening on systemd-networkd.socket.
Feb 13 07:14:47.561948 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Feb 13 07:14:47.561953 kernel: clocksource: Switched to clocksource tsc
Feb 13 07:14:47.561959 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 13 07:14:47.561965 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 13 07:14:47.561970 systemd[1]: Reached target sockets.target.
Feb 13 07:14:47.561976 systemd[1]: Starting kmod-static-nodes.service...
Feb 13 07:14:47.561981 systemd[1]: Finished network-cleanup.service.
Feb 13 07:14:47.561986 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 07:14:47.561992 systemd[1]: Starting systemd-journald.service...
Feb 13 07:14:47.561997 systemd[1]: Starting systemd-modules-load.service...
Feb 13 07:14:47.562005 systemd-journald[267]: Journal started
Feb 13 07:14:47.562030 systemd-journald[267]: Runtime Journal (/run/log/journal/8c8e340d93c042f7a21953cc26cbf7b3) is 8.0M, max 640.1M, 632.1M free.
Feb 13 07:14:47.565171 systemd-modules-load[268]: Inserted module 'overlay'
Feb 13 07:14:47.586486 kernel: audit: type=1334 audit(1707808487.571:2): prog-id=6 op=LOAD
Feb 13 07:14:47.586496 systemd[1]: Starting systemd-resolved.service...
Feb 13 07:14:47.571000 audit: BPF prog-id=6 op=LOAD
Feb 13 07:14:47.639427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 07:14:47.639442 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 13 07:14:47.671429 kernel: Bridge firewalling registered
Feb 13 07:14:47.671447 systemd[1]: Started systemd-journald.service.
Feb 13 07:14:47.685658 systemd-modules-load[268]: Inserted module 'br_netfilter'
Feb 13 07:14:47.735264 kernel: audit: type=1130 audit(1707808487.693:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.692000 systemd-resolved[270]: Positive Trust Anchors:
Feb 13 07:14:47.792290 kernel: SCSI subsystem initialized
Feb 13 07:14:47.792300 kernel: audit: type=1130 audit(1707808487.747:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.692006 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 07:14:47.914466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 07:14:47.914481 kernel: audit: type=1130 audit(1707808487.818:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.914488 kernel: device-mapper: uevent: version 1.0.3
Feb 13 07:14:47.914495 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 13 07:14:47.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.692024 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 13 07:14:47.987650 kernel: audit: type=1130 audit(1707808487.922:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.693545 systemd-resolved[270]: Defaulting to hostname 'linux'.
Feb 13 07:14:47.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.693649 systemd[1]: Finished kmod-static-nodes.service.
Feb 13 07:14:48.096991 kernel: audit: type=1130 audit(1707808487.996:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.097002 kernel: audit: type=1130 audit(1707808488.050:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:47.747521 systemd[1]: Started systemd-resolved.service.
Feb 13 07:14:47.818552 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 07:14:47.915609 systemd-modules-load[268]: Inserted module 'dm_multipath'
Feb 13 07:14:47.922683 systemd[1]: Finished systemd-modules-load.service.
Feb 13 07:14:47.996723 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 13 07:14:48.050709 systemd[1]: Reached target nss-lookup.target.
Feb 13 07:14:48.105970 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 13 07:14:48.125944 systemd[1]: Starting systemd-sysctl.service...
Feb 13 07:14:48.126240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 13 07:14:48.129073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 13 07:14:48.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.129935 systemd[1]: Finished systemd-sysctl.service.
Feb 13 07:14:48.179592 kernel: audit: type=1130 audit(1707808488.128:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.191738 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 13 07:14:48.256477 kernel: audit: type=1130 audit(1707808488.191:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.248989 systemd[1]: Starting dracut-cmdline.service...
Feb 13 07:14:48.272489 dracut-cmdline[294]: dracut-dracut-053
Feb 13 07:14:48.272489 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 13 07:14:48.272489 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 07:14:48.340479 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 07:14:48.340491 kernel: iscsi: registered transport (tcp)
Feb 13 07:14:48.389530 kernel: iscsi: registered transport (qla4xxx)
Feb 13 07:14:48.389545 kernel: QLogic iSCSI HBA Driver
Feb 13 07:14:48.405985 systemd[1]: Finished dracut-cmdline.service.
Feb 13 07:14:48.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 13 07:14:48.406546 systemd[1]: Starting dracut-pre-udev.service...
Feb 13 07:14:48.461459 kernel: raid6: avx2x4 gen() 48095 MB/s Feb 13 07:14:48.496458 kernel: raid6: avx2x4 xor() 14858 MB/s Feb 13 07:14:48.531418 kernel: raid6: avx2x2 gen() 51914 MB/s Feb 13 07:14:48.566425 kernel: raid6: avx2x2 xor() 32131 MB/s Feb 13 07:14:48.601464 kernel: raid6: avx2x1 gen() 44503 MB/s Feb 13 07:14:48.636419 kernel: raid6: avx2x1 xor() 27937 MB/s Feb 13 07:14:48.670421 kernel: raid6: sse2x4 gen() 21344 MB/s Feb 13 07:14:48.704462 kernel: raid6: sse2x4 xor() 11846 MB/s Feb 13 07:14:48.738459 kernel: raid6: sse2x2 gen() 21697 MB/s Feb 13 07:14:48.772418 kernel: raid6: sse2x2 xor() 13461 MB/s Feb 13 07:14:48.806460 kernel: raid6: sse2x1 gen() 18298 MB/s Feb 13 07:14:48.858406 kernel: raid6: sse2x1 xor() 8925 MB/s Feb 13 07:14:48.858421 kernel: raid6: using algorithm avx2x2 gen() 51914 MB/s Feb 13 07:14:48.858429 kernel: raid6: .... xor() 32131 MB/s, rmw enabled Feb 13 07:14:48.876656 kernel: raid6: using avx2x2 recovery algorithm Feb 13 07:14:48.923453 kernel: xor: automatically using best checksumming function avx Feb 13 07:14:49.002396 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 07:14:49.007183 systemd[1]: Finished dracut-pre-udev.service. Feb 13 07:14:49.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:49.007000 audit: BPF prog-id=7 op=LOAD Feb 13 07:14:49.007000 audit: BPF prog-id=8 op=LOAD Feb 13 07:14:49.008012 systemd[1]: Starting systemd-udevd.service... Feb 13 07:14:49.015867 systemd-udevd[475]: Using default interface naming scheme 'v252'. Feb 13 07:14:49.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:49.028738 systemd[1]: Started systemd-udevd.service. Feb 13 07:14:49.071535 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Feb 13 07:14:49.046013 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 07:14:49.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:49.073991 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 07:14:49.089565 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:14:49.140161 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:14:49.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:49.167407 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 07:14:49.204195 kernel: ACPI: bus type USB registered Feb 13 07:14:49.204233 kernel: usbcore: registered new interface driver usbfs Feb 13 07:14:49.204242 kernel: usbcore: registered new interface driver hub Feb 13 07:14:49.239753 kernel: usbcore: registered new device driver usb Feb 13 07:14:49.240435 kernel: libata version 3.00 loaded. Feb 13 07:14:49.265428 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 07:14:49.265460 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 07:14:49.303469 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:14:49.303559 kernel: AES CTR mode by8 optimization enabled Feb 13 07:14:49.304421 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 07:14:49.357326 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 07:14:49.357350 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 07:14:49.357434 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 07:14:49.357447 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 07:14:49.394393 kernel: scsi host0: ahci Feb 13 07:14:49.421013 kernel: scsi host1: ahci Feb 13 07:14:49.421109 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:14:49.421174 kernel: scsi host2: ahci Feb 13 07:14:49.429390 kernel: pps pps0: new PPS source ptp0 Feb 13 07:14:49.429467 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 07:14:49.429541 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:14:49.429602 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:24 Feb 13 07:14:49.429661 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 07:14:49.429720 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 07:14:49.449955 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 07:14:49.464211 kernel: scsi host3: ahci Feb 13 07:14:49.479392 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 07:14:49.479509 kernel: scsi host4: ahci Feb 13 07:14:49.479524 kernel: pps pps1: new PPS source ptp1 Feb 13 07:14:49.479577 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 07:14:49.479630 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:14:49.479679 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:25 Feb 13 07:14:49.479726 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 07:14:49.479773 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 07:14:49.495445 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:14:49.524734 kernel: scsi host5: ahci Feb 13 07:14:49.524798 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 07:14:49.556126 kernel: scsi host6: ahci Feb 13 07:14:49.556150 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 07:14:49.563390 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:14:49.576399 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 13 07:14:49.576435 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:14:49.593886 kernel: hub 1-0:1.0: USB hub found Feb 13 07:14:49.594093 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 13 07:14:49.595436 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 07:14:49.631995 kernel: hub 1-0:1.0: 16 ports detected Feb 13 07:14:49.632074 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 13 07:14:49.659727 kernel: hub 2-0:1.0: USB hub found Feb 13 07:14:49.659808 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 13 07:14:49.675284 kernel: hub 2-0:1.0: 10 ports detected Feb 13 07:14:49.675359 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 13 07:14:49.699597 kernel: usb: port power management may be unreliable Feb 13 07:14:49.699613 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 13 07:14:49.777919 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:14:49.777992 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 13 07:14:49.887391 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 07:14:50.046462 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 07:14:50.093197 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 07:14:50.093275 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:14:50.186923 kernel: hub 1-14:1.0: USB hub found Feb 13 07:14:50.187003 kernel: hub 1-14:1.0: 4 ports detected Feb 13 07:14:50.208438 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 07:14:50.208456 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:14:50.225390 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:14:50.242392 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 07:14:50.258392 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 07:14:50.274433 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 07:14:50.290438 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Feb 13 07:14:50.307433 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 07:14:50.323427 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU004, max UDMA/133 Feb 13 07:14:50.376682 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:14:50.376699 kernel: ata2.00: Features: NCQ-prio Feb 13 07:14:50.376707 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:14:50.376774 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:14:50.429431 kernel: port_module: 9 callbacks suppressed Feb 13 07:14:50.429448 kernel: mlx5_core 
0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 07:14:50.429517 kernel: ata1.00: Features: NCQ-prio Feb 13 07:14:50.448424 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:14:50.462420 kernel: ata2.00: configured for UDMA/133 Feb 13 07:14:50.482429 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 07:14:50.515432 kernel: ata1.00: configured for UDMA/133 Feb 13 07:14:50.530414 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U004 PQ: 0 ANSI: 5 Feb 13 07:14:50.550427 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Feb 13 07:14:50.592848 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:50.592865 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:14:50.635425 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:14:50.635504 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:14:50.635565 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 07:14:50.635623 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:14:50.645655 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 07:14:50.645734 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 07:14:50.661395 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 07:14:50.681404 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 07:14:50.681481 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:14:50.681541 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 07:14:50.697230 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 07:14:50.777168 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:14:50.777183 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:14:50.777253 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:14:50.811664 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 07:14:50.827463 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:50.878149 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 07:14:50.878165 kernel: GPT:9289727 != 937703087 Feb 13 07:14:50.878172 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 07:14:50.894342 kernel: GPT:9289727 != 937703087 Feb 13 07:14:50.907614 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 07:14:50.922376 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:14:50.951543 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:50.951559 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 07:14:50.984393 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Feb 13 07:14:51.005392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 07:14:51.033491 kernel: usbcore: registered new interface driver usbhid Feb 13 07:14:51.033505 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (542) Feb 13 07:14:51.033512 kernel: usbhid: USB HID core driver Feb 13 07:14:51.063770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 13 07:14:51.092492 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 07:14:51.092504 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Feb 13 07:14:51.068979 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 07:14:51.105186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 07:14:51.117367 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:14:51.240504 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 07:14:51.240810 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 07:14:51.240828 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 07:14:51.241060 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:51.229463 systemd[1]: Starting disk-uuid.service... Feb 13 07:14:51.266484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:14:51.266628 disk-uuid[692]: Primary Header is updated. Feb 13 07:14:51.266628 disk-uuid[692]: Secondary Entries is updated. Feb 13 07:14:51.266628 disk-uuid[692]: Secondary Header is updated. Feb 13 07:14:51.319464 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:51.319487 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:14:51.319503 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:51.345442 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:14:52.325170 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:14:52.345317 disk-uuid[693]: The operation has completed successfully. Feb 13 07:14:52.354569 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:14:52.382353 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 07:14:52.479749 kernel: audit: type=1130 audit(1707808492.389:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.479764 kernel: audit: type=1131 audit(1707808492.389:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.382458 systemd[1]: Finished disk-uuid.service. Feb 13 07:14:52.510421 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 07:14:52.395105 systemd[1]: Starting verity-setup.service... Feb 13 07:14:52.539067 systemd[1]: Found device dev-mapper-usr.device. Feb 13 07:14:52.549430 systemd[1]: Mounting sysusr-usr.mount... Feb 13 07:14:52.564626 systemd[1]: Finished verity-setup.service. Feb 13 07:14:52.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:14:52.630393 kernel: audit: type=1130 audit(1707808492.583:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.658121 systemd[1]: Mounted sysusr-usr.mount. Feb 13 07:14:52.673554 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 07:14:52.665665 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 07:14:52.755712 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:14:52.755728 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:14:52.755735 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:14:52.755742 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:14:52.666051 systemd[1]: Starting ignition-setup.service... Feb 13 07:14:52.688801 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 07:14:52.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.763932 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 07:14:52.886893 kernel: audit: type=1130 audit(1707808492.780:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.886910 kernel: audit: type=1130 audit(1707808492.838:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.801253 systemd[1]: Finished ignition-setup.service. Feb 13 07:14:52.917067 kernel: audit: type=1334 audit(1707808492.894:24): prog-id=9 op=LOAD Feb 13 07:14:52.894000 audit: BPF prog-id=9 op=LOAD Feb 13 07:14:52.839047 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 07:14:52.895245 systemd[1]: Starting systemd-networkd.service... Feb 13 07:14:52.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.930496 systemd-networkd[876]: lo: Link UP Feb 13 07:14:53.005614 kernel: audit: type=1130 audit(1707808492.939:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.959205 ignition[869]: Ignition 2.14.0 Feb 13 07:14:52.930498 systemd-networkd[876]: lo: Gained carrier Feb 13 07:14:53.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.959225 ignition[869]: Stage: fetch-offline Feb 13 07:14:53.158135 kernel: audit: type=1130 audit(1707808493.026:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:14:53.158156 kernel: audit: type=1130 audit(1707808493.084:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:53.158164 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:14:53.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.930785 systemd-networkd[876]: Enumeration completed Feb 13 07:14:53.183330 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 13 07:14:52.959248 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:14:52.930825 systemd[1]: Started systemd-networkd.service. Feb 13 07:14:53.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.959261 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:14:52.931374 systemd-networkd[876]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:14:52.962283 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:14:53.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:53.254560 iscsid[908]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:14:53.254560 iscsid[908]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 07:14:53.254560 iscsid[908]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 13 07:14:53.254560 iscsid[908]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 07:14:53.254560 iscsid[908]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 07:14:53.254560 iscsid[908]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:14:53.254560 iscsid[908]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 07:14:53.398623 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 07:14:53.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:52.939456 systemd[1]: Reached target network.target. Feb 13 07:14:52.962347 ignition[869]: parsed url from cmdline: "" Feb 13 07:14:52.987416 unknown[869]: fetched base config from "system" Feb 13 07:14:52.962349 ignition[869]: no config URL provided Feb 13 07:14:52.987420 unknown[869]: fetched user config from "system" Feb 13 07:14:52.962352 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 07:14:52.999064 systemd[1]: Starting iscsiuio.service.
Feb 13 07:14:52.967493 ignition[869]: parsing config with SHA512: f66cf9df5c474513a6ae67c76c7b4c583c3cce03f3d21ef01638806b865aea7ce3898f338a488d9693fbe469cd7e657ed2814abc06ae57bbaf2b7a6b9cf05f00 Feb 13 07:14:53.012735 systemd[1]: Started iscsiuio.service. Feb 13 07:14:52.987777 ignition[869]: fetch-offline: fetch-offline passed Feb 13 07:14:53.026587 systemd[1]: Finished ignition-fetch-offline.service. Feb 13 07:14:52.987780 ignition[869]: POST message to Packet Timeline Feb 13 07:14:53.084712 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 07:14:52.987784 ignition[869]: POST Status error: resource requires networking Feb 13 07:14:53.085212 systemd[1]: Starting ignition-kargs.service... Feb 13 07:14:52.987813 ignition[869]: Ignition finished successfully Feb 13 07:14:53.159542 systemd-networkd[876]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:14:53.162882 ignition[898]: Ignition 2.14.0 Feb 13 07:14:53.172001 systemd[1]: Starting iscsid.service... Feb 13 07:14:53.162886 ignition[898]: Stage: kargs Feb 13 07:14:53.194558 systemd[1]: Started iscsid.service. Feb 13 07:14:53.162952 ignition[898]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:14:53.204976 systemd[1]: Starting dracut-initqueue.service... Feb 13 07:14:53.162963 ignition[898]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:14:53.224557 systemd[1]: Finished dracut-initqueue.service. Feb 13 07:14:53.164616 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:14:53.244458 systemd[1]: Reached target remote-fs-pre.target. Feb 13 07:14:53.166173 ignition[898]: kargs: kargs passed Feb 13 07:14:53.262532 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:14:53.166176 ignition[898]: POST message to Packet Timeline Feb 13 07:14:53.262578 systemd[1]: Reached target remote-fs.target. Feb 13 07:14:53.166187 ignition[898]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:14:53.316544 systemd[1]: Starting dracut-pre-mount.service... Feb 13 07:14:53.169961 ignition[898]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50214->[::1]:53: read: connection refused Feb 13 07:14:53.336662 systemd[1]: Finished dracut-pre-mount.service. Feb 13 07:14:53.370332 ignition[898]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 07:14:53.375834 systemd-networkd[876]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:14:53.370861 ignition[898]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54389->[::1]:53: read: connection refused Feb 13 07:14:53.405878 systemd-networkd[876]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 07:14:53.437002 systemd-networkd[876]: enp1s0f1np1: Link UP Feb 13 07:14:53.437608 systemd-networkd[876]: enp1s0f1np1: Gained carrier Feb 13 07:14:53.448912 systemd-networkd[876]: enp1s0f0np0: Link UP Feb 13 07:14:53.449444 systemd-networkd[876]: eno2: Link UP Feb 13 07:14:53.449931 systemd-networkd[876]: eno1: Link UP Feb 13 07:14:53.771125 ignition[898]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 07:14:53.772305 ignition[898]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59643->[::1]:53: read: connection refused Feb 13 07:14:54.186022 systemd-networkd[876]: enp1s0f0np0: Gained carrier Feb 13 07:14:54.195609 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 13 07:14:54.223685 systemd-networkd[876]: enp1s0f0np0: DHCPv4 address 145.40.90.207/31, gateway 145.40.90.206 acquired from 145.40.83.140 Feb 13 07:14:54.572738 ignition[898]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 07:14:54.573931 ignition[898]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41558->[::1]:53: read: connection refused Feb 13 07:14:54.994997 systemd-networkd[876]: enp1s0f1np1: Gained IPv6LL Feb 13 07:14:55.570968 systemd-networkd[876]: enp1s0f0np0: Gained IPv6LL Feb 13 07:14:56.175653 ignition[898]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 07:14:56.176910 ignition[898]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59937->[::1]:53: read: connection refused Feb 13 07:14:59.380450 ignition[898]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 07:14:59.416396 ignition[898]: GET result: OK Feb 13 07:14:59.601457 ignition[898]: Ignition finished successfully Feb 13 07:14:59.605824 systemd[1]: Finished ignition-kargs.service. Feb 13 07:14:59.694828 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 07:14:59.694847 kernel: audit: type=1130 audit(1707808499.617:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:59.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:59.626515 ignition[926]: Ignition 2.14.0 Feb 13 07:14:59.619679 systemd[1]: Starting ignition-disks.service... Feb 13 07:14:59.626519 ignition[926]: Stage: disks Feb 13 07:14:59.626575 ignition[926]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:14:59.626585 ignition[926]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:14:59.627934 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:14:59.629631 ignition[926]: disks: disks passed Feb 13 07:14:59.629635 ignition[926]: POST message to Packet Timeline Feb 13 07:14:59.629644 ignition[926]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:14:59.652241 ignition[926]: GET result: OK Feb 13 07:14:59.870884 ignition[926]: Ignition finished successfully Feb 13 07:14:59.873780 systemd[1]: Finished ignition-disks.service. 
Feb 13 07:14:59.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:59.888016 systemd[1]: Reached target initrd-root-device.target. Feb 13 07:14:59.959618 kernel: audit: type=1130 audit(1707808499.887:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:14:59.959567 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:14:59.973559 systemd[1]: Reached target local-fs.target. Feb 13 07:14:59.973672 systemd[1]: Reached target sysinit.target. Feb 13 07:14:59.987676 systemd[1]: Reached target basic.target. Feb 13 07:15:00.008404 systemd[1]: Starting systemd-fsck-root.service... Feb 13 07:15:00.030432 systemd-fsck[941]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 07:15:00.042975 systemd[1]: Finished systemd-fsck-root.service. Feb 13 07:15:00.134267 kernel: audit: type=1130 audit(1707808500.051:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.134281 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 07:15:00.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.057228 systemd[1]: Mounting sysroot.mount... Feb 13 07:15:00.142085 systemd[1]: Mounted sysroot.mount. Feb 13 07:15:00.156640 systemd[1]: Reached target initrd-root-fs.target. Feb 13 07:15:00.165339 systemd[1]: Mounting sysroot-usr.mount... Feb 13 07:15:00.180325 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 07:15:00.200204 systemd[1]: Starting flatcar-static-network.service... Feb 13 07:15:00.216514 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 07:15:00.216571 systemd[1]: Reached target ignition-diskful.target. Feb 13 07:15:00.234796 systemd[1]: Mounted sysroot-usr.mount. Feb 13 07:15:00.258462 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 07:15:00.273423 systemd[1]: Starting initrd-setup-root.service... Feb 13 07:15:00.403653 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (954) Feb 13 07:15:00.403671 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:15:00.403683 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:15:00.403690 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:15:00.403697 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:15:00.403744 coreos-metadata[949]: Feb 13 07:15:00.328 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:00.403744 coreos-metadata[949]: Feb 13 07:15:00.371 INFO Fetch successful Feb 13 07:15:00.528285 kernel: audit: type=1130 audit(1707808500.412:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:00.528299 kernel: audit: type=1130 audit(1707808500.473:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.528362 coreos-metadata[948]: Feb 13 07:15:00.328 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:00.528362 coreos-metadata[948]: Feb 13 07:15:00.352 INFO Fetch successful Feb 13 07:15:00.528362 coreos-metadata[948]: Feb 13 07:15:00.370 INFO wrote hostname ci-3510.3.2-a-fe1fbff781 to /sysroot/etc/hostname Feb 13 07:15:00.665541 kernel: audit: type=1130 audit(1707808500.536:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.665555 kernel: audit: type=1131 audit(1707808500.536:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.328594 systemd[1]: Finished initrd-setup-root.service. Feb 13 07:15:00.692505 initrd-setup-root[959]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 07:15:00.413731 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 13 07:15:00.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.745531 initrd-setup-root[967]: cut: /sysroot/etc/group: No such file or directory Feb 13 07:15:00.783594 kernel: audit: type=1130 audit(1707808500.717:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.473666 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 07:15:00.793588 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 07:15:00.473704 systemd[1]: Finished flatcar-static-network.service. 
Feb 13 07:15:00.811561 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 07:15:00.821563 ignition[1024]: INFO : Ignition 2.14.0 Feb 13 07:15:00.821563 ignition[1024]: INFO : Stage: mount Feb 13 07:15:00.821563 ignition[1024]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:00.821563 ignition[1024]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:00.821563 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:00.821563 ignition[1024]: INFO : mount: mount passed Feb 13 07:15:00.821563 ignition[1024]: INFO : POST message to Packet Timeline Feb 13 07:15:00.821563 ignition[1024]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:00.821563 ignition[1024]: INFO : GET result: OK Feb 13 07:15:00.536650 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 07:15:00.657975 systemd[1]: Starting ignition-mount.service... Feb 13 07:15:00.684933 systemd[1]: Starting sysroot-boot.service... Feb 13 07:15:00.699922 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 13 07:15:00.699974 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 13 07:15:00.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:01.009820 ignition[1024]: INFO : Ignition finished successfully Feb 13 07:15:01.024581 kernel: audit: type=1130 audit(1707808500.952:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:00.704504 systemd[1]: Finished sysroot-boot.service. Feb 13 07:15:00.939384 systemd[1]: Finished ignition-mount.service. Feb 13 07:15:01.068482 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1041) Feb 13 07:15:01.068495 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:15:00.954533 systemd[1]: Starting ignition-files.service... Feb 13 07:15:01.116475 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:15:01.116486 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:15:01.116495 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:15:01.018178 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 07:15:01.152438 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 13 07:15:01.176975 ignition[1060]: INFO : Ignition 2.14.0 Feb 13 07:15:01.176975 ignition[1060]: INFO : Stage: files Feb 13 07:15:01.190622 ignition[1060]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:01.190622 ignition[1060]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:01.190622 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:01.190622 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping Feb 13 07:15:01.190622 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 07:15:01.190622 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 07:15:01.190622 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 07:15:01.190622 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 07:15:01.190622 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 07:15:01.190622 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 07:15:01.190622 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 07:15:01.183730 unknown[1060]: wrote ssh authorized keys file for user: core Feb 13 07:15:01.366082 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 07:15:01.477490 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 07:15:01.494687 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 13 07:15:01.494687 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 13 07:15:01.949679 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 07:15:02.045888 ignition[1060]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 13 07:15:02.071634 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 13 07:15:02.071634 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 13 07:15:02.071634 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 13 07:15:02.460491 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 07:15:02.515172 ignition[1060]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 13 07:15:02.540636 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 13 07:15:02.540636 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 13 07:15:02.540636 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 13 07:15:02.650753 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 07:15:05.393741 ignition[1060]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 13 07:15:05.419701 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 13 07:15:05.419701 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 13 07:15:05.419701 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 13 07:15:05.505578 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 13 07:15:12.270811 ignition[1060]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 13 07:15:12.296723 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 13 07:15:12.296723 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 13 07:15:12.296723 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 13 07:15:12.345609 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 13 07:15:13.669519 ignition[1060]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 13 07:15:13.694598 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 13 07:15:13.694598 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 13 07:15:13.694598 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 13 07:15:13.694598 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 07:15:13.694598 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 07:15:14.072122 ignition[1060]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET result: OK Feb 13 07:15:14.124007 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 07:15:14.124007 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 13 07:15:14.173623 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1062) Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem476606031" Feb 13 07:15:14.173638 ignition[1060]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem476606031": device or resource busy Feb 13 07:15:14.173638 ignition[1060]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem476606031", trying btrfs: device or resource busy Feb 13 07:15:14.173638 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem476606031" Feb 13 07:15:14.488600 kernel: audit: type=1130 audit(1707808514.377:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.369303 systemd[1]: Finished ignition-files.service. 
Feb 13 07:15:14.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.557644 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem476606031" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem476606031" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem476606031" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(15): [started] processing unit "packet-phone-home.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(18): [started] processing unit "prepare-critools.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(18): [finished] processing unit "prepare-critools.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 13 07:15:14.557644 ignition[1060]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 07:15:15.220612 kernel: audit: type=1130 audit(1707808514.498:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.220636 kernel: audit: type=1130 audit(1707808514.565:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.220645 kernel: audit: type=1131 audit(1707808514.565:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:15.220652 kernel: audit: type=1130 audit(1707808514.748:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.220660 kernel: audit: type=1131 audit(1707808514.748:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.220673 kernel: audit: type=1130 audit(1707808514.956:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.220681 kernel: audit: type=1131 audit(1707808515.124:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.384336 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-critools.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-critools.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(20): [started] setting preset to enabled for "packet-phone-home.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: op(20): [finished] setting preset to enabled for "packet-phone-home.service" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 07:15:15.237615 ignition[1060]: INFO : files: files passed Feb 13 07:15:15.237615 ignition[1060]: INFO : POST message to Packet Timeline Feb 13 07:15:15.237615 ignition[1060]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:15.237615 ignition[1060]: INFO : GET result: OK Feb 13 07:15:15.237615 ignition[1060]: INFO : Ignition finished successfully Feb 13 07:15:15.723664 kernel: audit: type=1131 audit(1707808515.443:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.723750 kernel: audit: type=1131 audit(1707808515.534:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:15.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.724300 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 07:15:15.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.444623 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 13 07:15:15.771699 iscsid[908]: iscsid shutting down. Feb 13 07:15:14.445023 systemd[1]: Starting ignition-quench.service... Feb 13 07:15:14.465925 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 07:15:15.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.498899 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 07:15:15.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.498982 systemd[1]: Finished ignition-quench.service. Feb 13 07:15:15.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.565656 systemd[1]: Reached target ignition-complete.target. Feb 13 07:15:15.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.687069 systemd[1]: Starting initrd-parse-etc.service... Feb 13 07:15:15.875831 ignition[1108]: INFO : Ignition 2.14.0 Feb 13 07:15:15.875831 ignition[1108]: INFO : Stage: umount Feb 13 07:15:15.875831 ignition[1108]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:15:15.875831 ignition[1108]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:15:15.875831 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:15:15.875831 ignition[1108]: INFO : umount: umount passed Feb 13 07:15:15.875831 ignition[1108]: INFO : POST message to Packet Timeline Feb 13 07:15:15.875831 ignition[1108]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:15:15.875831 ignition[1108]: INFO : GET result: OK Feb 13 07:15:15.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:15.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:16.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.729852 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 07:15:16.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:16.048100 ignition[1108]: INFO : Ignition finished successfully Feb 13 07:15:16.047000 audit: BPF prog-id=6 op=UNLOAD Feb 13 07:15:14.729907 systemd[1]: Finished initrd-parse-etc.service. Feb 13 07:15:16.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.748711 systemd[1]: Reached target initrd-fs.target. Feb 13 07:15:16.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.863557 systemd[1]: Reached target initrd.target. Feb 13 07:15:16.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.899610 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 07:15:16.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.900044 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 07:15:14.929845 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 07:15:16.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:14.957937 systemd[1]: Starting initrd-cleanup.service... Feb 13 07:15:16.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.025561 systemd[1]: Stopped target nss-lookup.target. Feb 13 07:15:16.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.055753 systemd[1]: Stopped target remote-cryptsetup.target. 
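[annotation] The umount-stage records above show Ignition logging the SHA512 of the base config it parses ("parsing config with SHA512: 0131bd…"). A digest in that style can be recomputed from the file itself; a minimal sketch, assuming the digest is taken over the raw file bytes (which this log does not itself confirm):

```python
# Sketch: recompute the SHA512 digest Ignition logs for a parsed config.
# The default path matches the base config named in the log; any readable
# file works.
import hashlib
import sys

def config_sha512(path: str) -> str:
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/ignition/base.d/base.ign"
    print(f"SHA512({path}) = {config_sha512(path)}")
```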
Feb 13 07:15:15.073849 systemd[1]: Stopped target timers.target. Feb 13 07:15:16.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.105891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 07:15:15.106124 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 07:15:15.125378 systemd[1]: Stopped target initrd.target. Feb 13 07:15:16.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.202758 systemd[1]: Stopped target basic.target. Feb 13 07:15:16.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.227764 systemd[1]: Stopped target ignition-complete.target. Feb 13 07:15:16.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.245773 systemd[1]: Stopped target ignition-diskful.target. Feb 13 07:15:15.271859 systemd[1]: Stopped target initrd-root-device.target. Feb 13 07:15:16.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.293078 systemd[1]: Stopped target remote-fs.target. Feb 13 07:15:16.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.315078 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 07:15:16.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.338105 systemd[1]: Stopped target sysinit.target. Feb 13 07:15:16.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:16.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.359098 systemd[1]: Stopped target local-fs.target. Feb 13 07:15:15.380085 systemd[1]: Stopped target local-fs-pre.target. Feb 13 07:15:15.401077 systemd[1]: Stopped target swap.target. Feb 13 07:15:15.420958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 07:15:15.421322 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 07:15:15.444312 systemd[1]: Stopped target cryptsetup.target. Feb 13 07:15:15.521736 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 07:15:15.521819 systemd[1]: Stopped dracut-initqueue.service. Feb 13 07:15:15.534868 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 07:15:15.534946 systemd[1]: Stopped ignition-fetch-offline.service. 
Feb 13 07:15:15.603785 systemd[1]: Stopped target paths.target. Feb 13 07:15:15.619817 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 07:15:15.623611 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 07:15:15.641818 systemd[1]: Stopped target slices.target. Feb 13 07:15:15.656851 systemd[1]: Stopped target sockets.target. Feb 13 07:15:16.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:15.678834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 07:15:15.679010 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 07:15:15.699176 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 07:15:15.699563 systemd[1]: Stopped ignition-files.service. Feb 13 07:15:15.715178 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 07:15:15.715583 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 07:15:15.735308 systemd[1]: Stopping ignition-mount.service... Feb 13 07:15:15.754628 systemd[1]: Stopping iscsid.service... Feb 13 07:15:15.779131 systemd[1]: Stopping sysroot-boot.service... Feb 13 07:15:15.785532 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 07:15:15.785636 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 07:15:15.806666 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 07:15:15.806761 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 07:15:15.827179 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 07:15:15.828136 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 07:15:15.828256 systemd[1]: Stopped iscsid.service. Feb 13 07:15:15.839696 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 07:15:16.594405 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Feb 13 07:15:15.839881 systemd[1]: Stopped sysroot-boot.service. Feb 13 07:15:15.853875 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 07:15:15.854126 systemd[1]: Closed iscsid.socket. Feb 13 07:15:15.867881 systemd[1]: Stopping iscsiuio.service... Feb 13 07:15:15.884159 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 07:15:15.884408 systemd[1]: Stopped iscsiuio.service. Feb 13 07:15:15.898284 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 07:15:15.898535 systemd[1]: Finished initrd-cleanup.service. Feb 13 07:15:15.914598 systemd[1]: Stopped target network.target. Feb 13 07:15:15.931661 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 07:15:15.931757 systemd[1]: Closed iscsiuio.socket. Feb 13 07:15:15.958977 systemd[1]: Stopping systemd-networkd.service... Feb 13 07:15:15.964517 systemd-networkd[876]: enp1s0f0np0: DHCPv6 lease lost Feb 13 07:15:15.972578 systemd-networkd[876]: enp1s0f1np1: DHCPv6 lease lost Feb 13 07:15:16.594000 audit: BPF prog-id=9 op=UNLOAD Feb 13 07:15:15.976842 systemd[1]: Stopping systemd-resolved.service... Feb 13 07:15:15.992191 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 07:15:15.992445 systemd[1]: Stopped systemd-resolved.service. Feb 13 07:15:16.001071 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 07:15:16.001439 systemd[1]: Stopped systemd-networkd.service. Feb 13 07:15:16.025172 systemd[1]: ignition-mount.service: Deactivated successfully. 
Feb 13 07:15:16.025368 systemd[1]: Stopped ignition-mount.service. Feb 13 07:15:16.040037 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 07:15:16.040123 systemd[1]: Closed systemd-networkd.socket. Feb 13 07:15:16.055649 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 07:15:16.055765 systemd[1]: Stopped ignition-disks.service. Feb 13 07:15:16.071704 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 07:15:16.071816 systemd[1]: Stopped ignition-kargs.service. Feb 13 07:15:16.086691 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 07:15:16.086808 systemd[1]: Stopped ignition-setup.service. Feb 13 07:15:16.101785 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 07:15:16.101919 systemd[1]: Stopped initrd-setup-root.service. Feb 13 07:15:16.119486 systemd[1]: Stopping network-cleanup.service... Feb 13 07:15:16.136593 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 07:15:16.136743 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 07:15:16.153740 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:15:16.153890 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:15:16.170130 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 07:15:16.170271 systemd[1]: Stopped systemd-modules-load.service. Feb 13 07:15:16.186008 systemd[1]: Stopping systemd-udevd.service... Feb 13 07:15:16.204470 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 07:15:16.205955 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 07:15:16.206270 systemd[1]: Stopped systemd-udevd.service. Feb 13 07:15:16.219234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 07:15:16.219382 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 07:15:16.231737 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 07:15:16.231840 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 07:15:16.247783 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 07:15:16.247919 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 07:15:16.262952 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 07:15:16.263102 systemd[1]: Stopped dracut-cmdline.service. Feb 13 07:15:16.277824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 07:15:16.277949 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 07:15:16.295525 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 07:15:16.310581 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 07:15:16.310730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 07:15:16.327015 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 07:15:16.327128 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 07:15:16.342689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 07:15:16.342809 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 07:15:16.359900 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 07:15:16.361083 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 07:15:16.361270 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 07:15:16.488995 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 07:15:16.489203 systemd[1]: Stopped network-cleanup.service. 
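[annotation] Each stop in the teardown above is mirrored by a SERVICE_STOP audit record that carries the unit name in its msg field. A small sketch that pairs SERVICE_START/SERVICE_STOP records to recover per-unit state from a captured log; the regex is tailored to the record format shown in this transcript:

```python
# Sketch: derive unit state from SERVICE_START / SERVICE_STOP audit records
# of the form seen above: audit[1]: SERVICE_STOP ... msg='unit=<name> ...'
import re

EVENT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([^\s']+)")

def unit_states(lines):
    state = {}
    for line in lines:
        for event, unit in EVENT_RE.findall(line):
            state[unit] = "running" if event == "SERVICE_START" else "stopped"
    return state

sample = [
    "audit[1]: SERVICE_START pid=1 uid=0 msg='unit=initrd-cleanup comm=\"systemd\"'",
    "audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=initrd-cleanup comm=\"systemd\"'",
]
print(unit_states(sample))  # {'initrd-cleanup': 'stopped'}
```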
Feb 13 07:15:16.502936 systemd[1]: Reached target initrd-switch-root.target. Feb 13 07:15:16.519078 systemd[1]: Starting initrd-switch-root.service... Feb 13 07:15:16.548000 systemd[1]: Switching root. Feb 13 07:15:16.596915 systemd-journald[267]: Journal stopped Feb 13 07:15:20.593862 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 07:15:20.593876 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 07:15:20.593885 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 07:15:20.593890 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 07:15:20.593908 kernel: SELinux: policy capability open_perms=1 Feb 13 07:15:20.593913 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 07:15:20.593919 kernel: SELinux: policy capability always_check_network=0 Feb 13 07:15:20.593925 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 07:15:20.593930 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 07:15:20.593936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 07:15:20.593941 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 07:15:20.593946 systemd[1]: Successfully loaded SELinux policy in 319.563ms. Feb 13 07:15:20.593953 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.921ms. Feb 13 07:15:20.593959 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:15:20.593967 systemd[1]: Detected architecture x86-64. Feb 13 07:15:20.593972 systemd[1]: Detected first boot. Feb 13 07:15:20.593978 systemd[1]: Hostname set to . Feb 13 07:15:20.593988 systemd[1]: Initializing machine ID from random generator. Feb 13 07:15:20.594012 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 07:15:20.594018 systemd[1]: Populated /etc with preset unit settings. Feb 13 07:15:20.594024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:15:20.594031 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:15:20.594038 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:15:20.594059 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 07:15:20.594065 systemd[1]: Stopped initrd-switch-root.service. Feb 13 07:15:20.594087 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 07:15:20.594093 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 07:15:20.594100 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 07:15:20.594107 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 07:15:20.594112 systemd[1]: Created slice system-getty.slice. Feb 13 07:15:20.594118 systemd[1]: Created slice system-modprobe.slice. Feb 13 07:15:20.594124 systemd[1]: Created slice system-serial\x2dgetty.slice. 
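[annotation] The earlier files-stage ops ("setting preset to enabled for …") and the "Populated /etc with preset unit settings" line above are two halves of one mechanism: Ignition records enable rules as systemd presets, and systemd applies them on first boot. A rough sketch of writing such a preset file under the sysroot; the file name and exact location are assumptions for illustration, not taken from this log:

```python
# Sketch: emulate the "setting preset to enabled" step by writing a systemd
# preset file under the target sysroot. The preset file name below is
# hypothetical; the unit list is copied from the ops logged above.
from pathlib import Path

SYSROOT = Path("/sysroot")
UNITS = [
    "prepare-cni-plugins.service",
    "prepare-critools.service",
    "prepare-helm.service",
    "coreos-metadata-sshkeys@.service",
    "packet-phone-home.service",
]

preset_dir = SYSROOT / "etc/systemd/system-preset"
preset_dir.mkdir(parents=True, exist_ok=True)
preset_file = preset_dir / "20-ignition.preset"  # hypothetical name
preset_file.write_text("".join(f"enable {u}\n" for u in UNITS))
print(f"wrote {preset_file} with {len(UNITS)} enable rules")
```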
Feb 13 07:15:20.594137 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 07:15:20.594143 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 07:15:20.594149 systemd[1]: Created slice user.slice. Feb 13 07:15:20.594155 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:15:20.594162 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 07:15:20.594168 systemd[1]: Set up automount boot.automount. Feb 13 07:15:20.594174 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 07:15:20.594180 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 07:15:20.594188 systemd[1]: Stopped target initrd-fs.target. Feb 13 07:15:20.594194 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 07:15:20.594200 systemd[1]: Reached target integritysetup.target. Feb 13 07:15:20.594206 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:15:20.594213 systemd[1]: Reached target remote-fs.target. Feb 13 07:15:20.594219 systemd[1]: Reached target slices.target. Feb 13 07:15:20.594225 systemd[1]: Reached target swap.target. Feb 13 07:15:20.594231 systemd[1]: Reached target torcx.target. Feb 13 07:15:20.594237 systemd[1]: Reached target veritysetup.target. Feb 13 07:15:20.594244 systemd[1]: Listening on systemd-coredump.socket. Feb 13 07:15:20.594250 systemd[1]: Listening on systemd-initctl.socket. Feb 13 07:15:20.594256 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:15:20.594267 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:15:20.594275 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:15:20.594281 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 07:15:20.594287 systemd[1]: Mounting dev-hugepages.mount... Feb 13 07:15:20.594294 systemd[1]: Mounting dev-mqueue.mount... Feb 13 07:15:20.594300 systemd[1]: Mounting media.mount... Feb 13 07:15:20.594307 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:15:20.594313 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 07:15:20.594320 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 07:15:20.594326 systemd[1]: Mounting tmp.mount... Feb 13 07:15:20.594332 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 07:15:20.594338 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 07:15:20.594345 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:15:20.594351 systemd[1]: Starting modprobe@configfs.service... Feb 13 07:15:20.594357 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 07:15:20.594364 systemd[1]: Starting modprobe@drm.service... Feb 13 07:15:20.594371 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 07:15:20.594377 systemd[1]: Starting modprobe@fuse.service... Feb 13 07:15:20.594391 kernel: fuse: init (API version 7.34) Feb 13 07:15:20.594401 systemd[1]: Starting modprobe@loop.service... Feb 13 07:15:20.594407 kernel: loop: module loaded Feb 13 07:15:20.594413 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 07:15:20.594420 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 07:15:20.594427 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 07:15:20.594434 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Feb 13 07:15:20.594440 kernel: kauditd_printk_skb: 60 callbacks suppressed Feb 13 07:15:20.594446 kernel: audit: type=1131 audit(1707808520.218:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.594452 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 07:15:20.594458 kernel: audit: type=1131 audit(1707808520.306:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.594464 systemd[1]: Stopped systemd-journald.service. Feb 13 07:15:20.594470 kernel: audit: type=1130 audit(1707808520.370:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.594477 kernel: audit: type=1131 audit(1707808520.370:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.594483 kernel: audit: type=1334 audit(1707808520.456:107): prog-id=15 op=LOAD Feb 13 07:15:20.594488 kernel: audit: type=1334 audit(1707808520.474:108): prog-id=16 op=LOAD Feb 13 07:15:20.594494 kernel: audit: type=1334 audit(1707808520.492:109): prog-id=17 op=LOAD Feb 13 07:15:20.594500 systemd[1]: Starting systemd-journald.service... Feb 13 07:15:20.594510 kernel: audit: type=1334 audit(1707808520.492:110): prog-id=13 op=UNLOAD Feb 13 07:15:20.594522 kernel: audit: type=1334 audit(1707808520.492:111): prog-id=14 op=UNLOAD Feb 13 07:15:20.594528 systemd[1]: Starting systemd-modules-load.service... Feb 13 07:15:20.594535 kernel: audit: type=1305 audit(1707808520.591:112): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:15:20.594543 systemd-journald[1259]: Journal started Feb 13 07:15:20.594568 systemd-journald[1259]: Runtime Journal (/run/log/journal/85f13e97ddbf4c73bde5f342b525941f) is 8.0M, max 640.1M, 632.1M free. 
Feb 13 07:15:17.011000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 07:15:17.293000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:15:17.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:15:17.295000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:15:17.295000 audit: BPF prog-id=10 op=LOAD Feb 13 07:15:17.295000 audit: BPF prog-id=10 op=UNLOAD Feb 13 07:15:17.295000 audit: BPF prog-id=11 op=LOAD Feb 13 07:15:17.295000 audit: BPF prog-id=11 op=UNLOAD Feb 13 07:15:17.410000 audit[1149]: AVC avc: denied { associate } for pid=1149 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 07:15:17.410000 audit[1149]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=1132 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:17.410000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:15:17.438000 audit[1149]: AVC avc: denied { associate } for pid=1149 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 07:15:17.438000 audit[1149]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001180d5 a2=1ed a3=0 items=2 ppid=1132 pid=1149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:17.438000 audit: CWD cwd="/" Feb 13 07:15:17.438000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:17.438000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:17.438000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:15:18.962000 audit: BPF prog-id=12 op=LOAD Feb 13 07:15:18.962000 audit: BPF prog-id=3 op=UNLOAD Feb 13 07:15:18.962000 audit: BPF prog-id=13 op=LOAD Feb 13 07:15:18.962000 audit: BPF prog-id=14 
op=LOAD Feb 13 07:15:18.962000 audit: BPF prog-id=4 op=UNLOAD Feb 13 07:15:18.962000 audit: BPF prog-id=5 op=UNLOAD Feb 13 07:15:18.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:19.015000 audit: BPF prog-id=12 op=UNLOAD Feb 13 07:15:19.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:19.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.456000 audit: BPF prog-id=15 op=LOAD Feb 13 07:15:20.474000 audit: BPF prog-id=16 op=LOAD Feb 13 07:15:20.492000 audit: BPF prog-id=17 op=LOAD Feb 13 07:15:20.492000 audit: BPF prog-id=13 op=UNLOAD Feb 13 07:15:20.492000 audit: BPF prog-id=14 op=UNLOAD Feb 13 07:15:20.591000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:15:17.406753 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:15:18.960610 systemd[1]: Queued start job for default target multi-user.target. Feb 13 07:15:17.407418 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:15:18.963314 systemd[1]: systemd-journald.service: Deactivated successfully. 
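[annotation] The PROCTITLE records above carry the generator's command line hex-encoded, with NUL bytes separating arguments. Decoding the prefix logged here recovers the torcx-generator path; a minimal decoder:

```python
# Sketch: decode an audit PROCTITLE value. The kernel hex-encodes the full
# command line with NUL bytes between arguments.
def decode_proctitle(hex_value: str) -> list:
    raw = bytes.fromhex(hex_value)
    return [a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a]

# First argument of the (truncated) value logged above:
sample = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E65726174"
          "6F72732F746F7263782D67656E657261746F72")
print(decode_proctitle(sample))
# ['/usr/lib/systemd/system-generators/torcx-generator']
```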
Feb 13 07:15:17.407450 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:15:17.407491 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 07:15:17.407507 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 07:15:17.407549 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 07:15:17.407568 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 07:15:17.407826 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 07:15:17.407889 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:15:17.407909 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:15:17.408574 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 07:15:17.408630 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 07:15:17.408658 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 07:15:17.408680 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 07:15:17.408706 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 07:15:17.408727 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 07:15:18.614492 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:18.614632 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init 
/bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:18.614686 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:18.614778 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:15:18.614808 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 07:15:18.614841 /usr/lib/systemd/system-generators/torcx-generator[1149]: time="2024-02-13T07:15:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 07:15:20.591000 audit[1259]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffef53920b0 a2=4000 a3=7ffef539214c items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:20.591000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 07:15:20.631410 systemd[1]: Starting systemd-network-generator.service... Feb 13 07:15:20.677392 systemd[1]: Starting systemd-remount-fs.service... Feb 13 07:15:20.704433 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:15:20.747075 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 07:15:20.747174 systemd[1]: Stopped verity-setup.service. Feb 13 07:15:20.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.791427 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:15:20.810517 systemd[1]: Started systemd-journald.service. Feb 13 07:15:20.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.818898 systemd[1]: Mounted dev-hugepages.mount. Feb 13 07:15:20.825629 systemd[1]: Mounted dev-mqueue.mount. Feb 13 07:15:20.832625 systemd[1]: Mounted media.mount. Feb 13 07:15:20.839642 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 07:15:20.848647 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 07:15:20.858616 systemd[1]: Mounted tmp.mount. Feb 13 07:15:20.866690 systemd[1]: Finished flatcar-tmpfiles.service. 
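[annotation] The torcx-generator entries above walk a fixed list of store paths (from the "common configuration parsed" line), skip the missing ones, and cache name:reference archives such as docker:com.coreos.cl.torcx.tgz. A simplified sketch of that scan; the real generator is a Go binary, so this Python version only mirrors the behavior the log reports:

```python
# Sketch: mimic torcx-generator's store scan. Store paths are copied from
# the "common configuration parsed" line above; existing stores are searched
# for <name>:<reference>.torcx.tgz archives.
from pathlib import Path

STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.2",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.2",
    "/var/lib/torcx/store",
]

for store in map(Path, STORE_PATHS):
    if not store.is_dir():
        print(f"store skipped: {store}")  # matches the log's "store skipped"
        continue
    for archive in sorted(store.glob("*.torcx.tgz")):
        name, _, reference = archive.name[: -len(".torcx.tgz")].partition(":")
        print(f"archive cached: name={name} reference={reference} path={archive}")
```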
Feb 13 07:15:20.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.875709 systemd[1]: Finished kmod-static-nodes.service. Feb 13 07:15:20.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.883730 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 07:15:20.883837 systemd[1]: Finished modprobe@configfs.service. Feb 13 07:15:20.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.892818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 07:15:20.892955 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 07:15:20.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.902949 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 07:15:20.903140 systemd[1]: Finished modprobe@drm.service. Feb 13 07:15:20.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.913225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 07:15:20.913543 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 07:15:20.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.923217 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 07:15:20.923538 systemd[1]: Finished modprobe@fuse.service. Feb 13 07:15:20.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:20.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.932226 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 07:15:20.932562 systemd[1]: Finished modprobe@loop.service. Feb 13 07:15:20.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.942247 systemd[1]: Finished systemd-modules-load.service. Feb 13 07:15:20.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.952193 systemd[1]: Finished systemd-network-generator.service. Feb 13 07:15:20.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.962189 systemd[1]: Finished systemd-remount-fs.service. Feb 13 07:15:20.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.971183 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:15:20.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:20.980672 systemd[1]: Reached target network-pre.target. Feb 13 07:15:20.991268 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 07:15:21.002114 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 07:15:21.009601 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 07:15:21.010546 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 07:15:21.017977 systemd[1]: Starting systemd-journal-flush.service... Feb 13 07:15:21.021472 systemd-journald[1259]: Time spent on flushing to /var/log/journal/85f13e97ddbf4c73bde5f342b525941f is 15.235ms for 1618 entries. Feb 13 07:15:21.021472 systemd-journald[1259]: System Journal (/var/log/journal/85f13e97ddbf4c73bde5f342b525941f) is 8.0M, max 195.6M, 187.6M free. Feb 13 07:15:21.060745 systemd-journald[1259]: Received client request to flush runtime journal. Feb 13 07:15:21.034495 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 07:15:21.035034 systemd[1]: Starting systemd-random-seed.service... Feb 13 07:15:21.049530 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 07:15:21.050037 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:15:21.057125 systemd[1]: Starting systemd-sysusers.service... 
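[annotation] For scale, the flush statistics logged above (15.235 ms for 1618 entries) come to roughly 9.4 µs per journal entry:

```python
# Sketch: per-entry cost of the journal flush reported above.
flush_ms, entries = 15.235, 1618
print(f"{flush_ms / entries * 1000:.1f} us/entry")  # ~9.4 us/entry
```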
Feb 13 07:15:21.064984 systemd[1]: Starting systemd-udev-settle.service... Feb 13 07:15:21.072618 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 07:15:21.080556 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 07:15:21.088594 systemd[1]: Finished systemd-journal-flush.service. Feb 13 07:15:21.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.096665 systemd[1]: Finished systemd-random-seed.service. Feb 13 07:15:21.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.104599 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:15:21.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.112600 systemd[1]: Finished systemd-sysusers.service. Feb 13 07:15:21.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.121585 systemd[1]: Reached target first-boot-complete.target. Feb 13 07:15:21.131181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 07:15:21.140506 udevadm[1275]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 07:15:21.149114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 07:15:21.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.314561 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 07:15:21.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.323000 audit: BPF prog-id=18 op=LOAD Feb 13 07:15:21.323000 audit: BPF prog-id=19 op=LOAD Feb 13 07:15:21.324000 audit: BPF prog-id=7 op=UNLOAD Feb 13 07:15:21.324000 audit: BPF prog-id=8 op=UNLOAD Feb 13 07:15:21.324727 systemd[1]: Starting systemd-udevd.service... Feb 13 07:15:21.336585 systemd-udevd[1278]: Using default interface naming scheme 'v252'. Feb 13 07:15:21.356040 systemd[1]: Started systemd-udevd.service. Feb 13 07:15:21.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.368000 audit: BPF prog-id=20 op=LOAD Feb 13 07:15:21.367790 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 13 07:15:21.369382 systemd[1]: Starting systemd-networkd.service... 
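[annotation] The surrounding "audit: BPF prog-id=N op=LOAD/UNLOAD" records bracket each service (re)start as systemd swaps its BPF programs. Tallying them shows which program IDs remain loaded at a given point in the log; a small sketch:

```python
# Sketch: track which BPF program IDs the audit stream reports as loaded,
# from records of the form "audit: BPF prog-id=<N> op=LOAD|UNLOAD".
import re

BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def loaded_prog_ids(lines):
    loaded = set()
    for line in lines:
        m = BPF_RE.search(line)
        if not m:
            continue
        prog_id, op = int(m.group(1)), m.group(2)
        if op == "LOAD":
            loaded.add(prog_id)
        else:
            loaded.discard(prog_id)
    return loaded

sample = [
    "audit: BPF prog-id=18 op=LOAD",
    "audit: BPF prog-id=19 op=LOAD",
    "audit: BPF prog-id=7 op=UNLOAD",
    "audit: BPF prog-id=8 op=UNLOAD",
]
print(loaded_prog_ids(sample))  # {18, 19}
```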
Feb 13 07:15:21.407000 audit: BPF prog-id=21 op=LOAD Feb 13 07:15:21.425408 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 07:15:21.425464 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 07:15:21.425481 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1350) Feb 13 07:15:21.425494 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 07:15:21.425000 audit: BPF prog-id=22 op=LOAD Feb 13 07:15:21.470000 audit: BPF prog-id=23 op=LOAD Feb 13 07:15:21.414000 audit[1292]: AVC avc: denied { confidentiality } for pid=1292 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:15:21.475093 systemd[1]: Starting systemd-userdbd.service... Feb 13 07:15:21.497416 kernel: ACPI: button: Power Button [PWRF] Feb 13 07:15:21.510554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:15:21.528400 kernel: IPMI message handler: version 39.2 Feb 13 07:15:21.528601 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 07:15:21.414000 audit[1292]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5601a26331d0 a1=4d8bc a2=7f98966bcbc5 a3=5 items=42 ppid=1278 pid=1292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:21.414000 audit: CWD cwd="/" Feb 13 07:15:21.414000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=1 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=2 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=3 name=(null) inode=15501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=4 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=5 name=(null) inode=15502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=6 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=7 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=8 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH 
item=9 name=(null) inode=15504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=10 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=11 name=(null) inode=15505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=12 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=13 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=14 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=15 name=(null) inode=15507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=16 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=17 name=(null) inode=15508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=18 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=19 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=20 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=21 name=(null) inode=15510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=22 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=23 name=(null) inode=15511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=24 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=25 name=(null) inode=15512 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=26 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=27 name=(null) inode=15513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=28 name=(null) inode=15509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=29 name=(null) inode=15514 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=30 name=(null) inode=15500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=31 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=32 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=33 name=(null) inode=15516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=34 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=35 name=(null) inode=15517 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=36 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=37 name=(null) inode=15518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=38 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=39 name=(null) inode=15519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=40 name=(null) inode=15515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PATH item=41 name=(null) inode=15520 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:15:21.414000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 07:15:21.564773 systemd[1]: Started systemd-userdbd.service. Feb 13 07:15:21.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.583392 kernel: ipmi device interface Feb 13 07:15:21.583418 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 07:15:21.603394 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 07:15:21.624421 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 07:15:21.624566 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 07:15:21.624644 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Feb 13 07:15:21.719392 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 07:15:21.719431 kernel: ipmi_si: IPMI System Interface driver Feb 13 07:15:21.756754 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 07:15:21.756985 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 07:15:21.777306 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 07:15:21.815600 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 07:15:21.815726 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 07:15:21.882279 systemd-networkd[1334]: bond0: netdev ready Feb 13 07:15:21.885133 systemd-networkd[1334]: lo: Link UP Feb 13 07:15:21.885135 systemd-networkd[1334]: lo: Gained carrier Feb 13 07:15:21.885788 systemd-networkd[1334]: Enumeration completed Feb 13 07:15:21.885899 systemd[1]: Started systemd-networkd.service. Feb 13 07:15:21.886218 systemd-networkd[1334]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 07:15:21.887689 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 07:15:21.887823 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 07:15:21.887885 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 07:15:21.887942 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 07:15:21.894617 systemd-networkd[1334]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bd.network. Feb 13 07:15:21.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:21.949482 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 07:15:22.007299 kernel: intel_rapl_common: Found RAPL domain package Feb 13 07:15:22.007350 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 07:15:22.007443 kernel: intel_rapl_common: Found RAPL domain core Feb 13 07:15:22.007455 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 07:15:22.024424 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 07:15:22.129394 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 07:15:22.149394 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 07:15:22.154624 systemd[1]: Finished systemd-udev-settle.service. 
Feb 13 07:15:22.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:22.164127 systemd[1]: Starting lvm2-activation-early.service... Feb 13 07:15:22.179765 lvm[1384]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:15:22.208822 systemd[1]: Finished lvm2-activation-early.service. Feb 13 07:15:22.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:22.217533 systemd[1]: Reached target cryptsetup.target. Feb 13 07:15:22.226073 systemd[1]: Starting lvm2-activation.service... Feb 13 07:15:22.228111 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:15:22.261818 systemd[1]: Finished lvm2-activation.service. Feb 13 07:15:22.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:22.270519 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:15:22.278471 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 07:15:22.278485 systemd[1]: Reached target local-fs.target. Feb 13 07:15:22.286486 systemd[1]: Reached target machines.target. Feb 13 07:15:22.295063 systemd[1]: Starting ldconfig.service... Feb 13 07:15:22.302082 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 07:15:22.302102 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:15:22.302623 systemd[1]: Starting systemd-boot-update.service... Feb 13 07:15:22.309899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 07:15:22.319963 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 07:15:22.320048 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:15:22.320073 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:15:22.320565 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 07:15:22.320744 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1387 (bootctl) Feb 13 07:15:22.321404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 07:15:22.329938 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 07:15:22.337887 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 07:15:22.340824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 07:15:22.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:22.355044 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 07:15:22.442430 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:15:22.466431 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 07:15:22.468619 systemd-networkd[1334]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network. Feb 13 07:15:22.535420 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:15:22.625426 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 07:15:22.649399 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 07:15:22.649461 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:15:22.649484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 07:15:22.688798 systemd-networkd[1334]: bond0: Link UP Feb 13 07:15:22.688977 systemd-networkd[1334]: enp1s0f1np1: Link UP Feb 13 07:15:22.689097 systemd-networkd[1334]: enp1s0f1np1: Gained carrier Feb 13 07:15:22.690063 systemd-networkd[1334]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:84:bc.network. Feb 13 07:15:22.742417 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.763391 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.765285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 07:15:22.765649 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 07:15:22.784267 systemd-fsck[1395]: fsck.fat 4.2 (2021-01-31) Feb 13 07:15:22.784267 systemd-fsck[1395]: /dev/sda1: 789 files, 115339/258078 clusters Feb 13 07:15:22.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:22.785423 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.800923 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 07:15:22.807392 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:22.824237 systemd[1]: Mounting boot.mount... Feb 13 07:15:22.828437 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.847434 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.852270 systemd[1]: Mounted boot.mount. Feb 13 07:15:22.866394 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.886394 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.891827 systemd[1]: Finished systemd-boot-update.service. Feb 13 07:15:22.904416 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:15:22.921943 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 07:15:22.923390 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:15:22.939251 systemd[1]: Starting audit-rules.service... Feb 13 07:15:22.942391 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.958081 systemd[1]: Starting clean-ca-certificates.service... Feb 13 07:15:22.959000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 07:15:22.959000 audit[1413]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3f12d820 a2=420 a3=0 items=0 ppid=1398 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:15:22.959000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 07:15:22.959621 augenrules[1413]: No rules Feb 13 07:15:22.961427 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.970678 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.970702 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:22.976880 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 07:15:23.014428 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:23.014582 systemd[1]: Starting systemd-resolved.service... Feb 13 07:15:23.031244 ldconfig[1386]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 07:15:23.033156 systemd[1]: Starting systemd-timesyncd.service... Feb 13 07:15:23.033391 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:23.033408 systemd-networkd[1334]: enp1s0f0np0: Link UP Feb 13 07:15:23.033569 systemd-networkd[1334]: bond0: Gained carrier Feb 13 07:15:23.033659 systemd-networkd[1334]: enp1s0f0np0: Gained carrier Feb 13 07:15:23.048985 systemd[1]: Starting systemd-update-utmp.service... Feb 13 07:15:23.051436 kernel: bond0: (slave enp1s0f1np1): link status down again after 200 ms Feb 13 07:15:23.051466 kernel: bond0: (slave enp1s0f1np1): link status definitely down, disabling slave Feb 13 07:15:23.051480 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:15:23.082736 systemd[1]: Finished ldconfig.service. Feb 13 07:15:23.100759 systemd-networkd[1334]: enp1s0f1np1: Link DOWN Feb 13 07:15:23.100762 systemd-networkd[1334]: enp1s0f1np1: Lost carrier Feb 13 07:15:23.101436 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Feb 13 07:15:23.101460 kernel: bond0: active interface up! Feb 13 07:15:23.119611 systemd[1]: Finished audit-rules.service. Feb 13 07:15:23.126582 systemd[1]: Finished clean-ca-certificates.service. Feb 13 07:15:23.134584 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 07:15:23.146348 systemd[1]: Starting systemd-update-done.service... 
Feb 13 07:15:23.153467 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 07:15:23.153712 systemd[1]: Finished systemd-update-utmp.service. Feb 13 07:15:23.161624 systemd[1]: Finished systemd-update-done.service. Feb 13 07:15:23.171762 systemd[1]: Started systemd-timesyncd.service. Feb 13 07:15:23.173158 systemd-resolved[1420]: Positive Trust Anchors: Feb 13 07:15:23.173163 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 07:15:23.173182 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 07:15:23.176944 systemd-resolved[1420]: Using system hostname 'ci-3510.3.2-a-fe1fbff781'. Feb 13 07:15:23.179665 systemd[1]: Reached target time-set.target. Feb 13 07:15:23.255419 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:15:23.257221 systemd-networkd[1334]: enp1s0f1np1: Link UP Feb 13 07:15:23.257361 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:23.257401 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:23.257497 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:23.259119 systemd-networkd[1334]: enp1s0f1np1: Gained carrier Feb 13 07:15:23.259670 systemd[1]: Started systemd-resolved.service. Feb 13 07:15:23.267507 systemd[1]: Reached target network.target. Feb 13 07:15:23.271581 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:23.275487 systemd[1]: Reached target nss-lookup.target. Feb 13 07:15:23.283487 systemd[1]: Reached target sysinit.target. Feb 13 07:15:23.291524 systemd[1]: Started motdgen.path. Feb 13 07:15:23.298493 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 07:15:23.313941 systemd[1]: Started logrotate.timer. Feb 13 07:15:23.322391 kernel: bond0: (slave enp1s0f1np1): link status up, enabling it in 200 ms Feb 13 07:15:23.322411 kernel: bond0: (slave enp1s0f1np1): invalid new link 3 on slave Feb 13 07:15:23.343512 systemd[1]: Started mdadm.timer. Feb 13 07:15:23.350472 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 07:15:23.358600 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 07:15:23.358671 systemd[1]: Reached target paths.target. Feb 13 07:15:23.365466 systemd[1]: Reached target timers.target. Feb 13 07:15:23.372598 systemd[1]: Listening on dbus.socket. Feb 13 07:15:23.380039 systemd[1]: Starting docker.socket... Feb 13 07:15:23.387839 systemd[1]: Listening on sshd.socket. Feb 13 07:15:23.394534 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:15:23.394761 systemd[1]: Listening on docker.socket. Feb 13 07:15:23.401521 systemd[1]: Reached target sockets.target. 
Feb 13 07:15:23.409471 systemd[1]: Reached target basic.target. Feb 13 07:15:23.416486 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:15:23.416498 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:15:23.416933 systemd[1]: Starting containerd.service... Feb 13 07:15:23.423878 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 07:15:23.432916 systemd[1]: Starting coreos-metadata.service... Feb 13 07:15:23.439961 systemd[1]: Starting dbus.service... Feb 13 07:15:23.445915 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 07:15:23.451313 jq[1436]: false Feb 13 07:15:23.452970 systemd[1]: Starting extend-filesystems.service... Feb 13 07:15:23.453822 coreos-metadata[1429]: Feb 13 07:15:23.453 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:23.457759 dbus-daemon[1435]: [system] SELinux support is enabled Feb 13 07:15:23.459503 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 07:15:23.460052 systemd[1]: Starting motdgen.service... Feb 13 07:15:23.461002 extend-filesystems[1437]: Found sda Feb 13 07:15:23.461002 extend-filesystems[1437]: Found sda1 Feb 13 07:15:23.487348 extend-filesystems[1437]: Found sda2 Feb 13 07:15:23.487348 extend-filesystems[1437]: Found sda3 Feb 13 07:15:23.487348 extend-filesystems[1437]: Found usr Feb 13 07:15:23.487348 extend-filesystems[1437]: Found sda4 Feb 13 07:15:23.487348 extend-filesystems[1437]: Found sda6 Feb 13 07:15:23.487348 extend-filesystems[1437]: Found sda7 Feb 13 07:15:23.487348 extend-filesystems[1437]: Found sda9 Feb 13 07:15:23.487348 extend-filesystems[1437]: Checking size of /dev/sda9 Feb 13 07:15:23.487348 extend-filesystems[1437]: Resized partition /dev/sda9 Feb 13 07:15:23.606450 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 07:15:23.606478 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Feb 13 07:15:23.606517 coreos-metadata[1432]: Feb 13 07:15:23.462 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:15:23.467328 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 07:15:23.606689 extend-filesystems[1453]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 07:15:23.500191 systemd[1]: Starting prepare-critools.service... Feb 13 07:15:23.508030 systemd[1]: Starting prepare-helm.service... Feb 13 07:15:23.521034 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 07:15:23.546955 systemd[1]: Starting sshd-keygen.service... Feb 13 07:15:23.568690 systemd[1]: Starting systemd-logind.service... Feb 13 07:15:23.575474 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:15:23.575988 systemd[1]: Starting tcsd.service... Feb 13 07:15:23.588594 systemd-logind[1466]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 07:15:23.622177 jq[1469]: true Feb 13 07:15:23.588603 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 07:15:23.588612 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 07:15:23.588743 systemd-logind[1466]: New seat seat0. 
Feb 13 07:15:23.598814 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 07:15:23.599171 systemd[1]: Starting update-engine.service... Feb 13 07:15:23.614003 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 07:15:23.629786 systemd[1]: Started dbus.service. Feb 13 07:15:23.638156 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 07:15:23.638241 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 07:15:23.638416 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 07:15:23.638495 systemd[1]: Finished motdgen.service. Feb 13 07:15:23.639640 update_engine[1468]: I0213 07:15:23.639214 1468 main.cc:92] Flatcar Update Engine starting Feb 13 07:15:23.642207 update_engine[1468]: I0213 07:15:23.642199 1468 update_check_scheduler.cc:74] Next update check in 10m34s Feb 13 07:15:23.646525 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 07:15:23.646607 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 13 07:15:23.650934 tar[1471]: ./ Feb 13 07:15:23.650934 tar[1471]: ./loopback Feb 13 07:15:23.657266 jq[1477]: true Feb 13 07:15:23.657611 dbus-daemon[1435]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 07:15:23.658554 tar[1472]: crictl Feb 13 07:15:23.659838 tar[1473]: linux-amd64/helm Feb 13 07:15:23.663124 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 07:15:23.663231 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 07:15:23.663324 systemd[1]: Started update-engine.service. Feb 13 07:15:23.667017 env[1478]: time="2024-02-13T07:15:23.666992821Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 07:15:23.670696 tar[1471]: ./bandwidth Feb 13 07:15:23.674225 systemd[1]: Started systemd-logind.service. Feb 13 07:15:23.675293 env[1478]: time="2024-02-13T07:15:23.675277254Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 07:15:23.675750 env[1478]: time="2024-02-13T07:15:23.675739215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:15:23.676354 env[1478]: time="2024-02-13T07:15:23.676338246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:15:23.676382 env[1478]: time="2024-02-13T07:15:23.676354055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:15:23.676482 env[1478]: time="2024-02-13T07:15:23.676472054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:15:23.676505 env[1478]: time="2024-02-13T07:15:23.676482801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 07:15:23.676505 env[1478]: time="2024-02-13T07:15:23.676490437Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 07:15:23.676505 env[1478]: time="2024-02-13T07:15:23.676495804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 07:15:23.676564 env[1478]: time="2024-02-13T07:15:23.676536416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:15:23.678357 env[1478]: time="2024-02-13T07:15:23.678340332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:15:23.678430 env[1478]: time="2024-02-13T07:15:23.678419926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:15:23.679900 env[1478]: time="2024-02-13T07:15:23.678430675Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 07:15:23.680354 env[1478]: time="2024-02-13T07:15:23.680342268Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 07:15:23.680376 env[1478]: time="2024-02-13T07:15:23.680354291Z" level=info msg="metadata content store policy set" policy=shared Feb 13 07:15:23.684201 systemd[1]: Started locksmithd.service. Feb 13 07:15:23.689619 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:15:23.690511 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 07:15:23.690594 systemd[1]: Reached target system-config.target. Feb 13 07:15:23.693343 env[1478]: time="2024-02-13T07:15:23.693320247Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 07:15:23.693383 env[1478]: time="2024-02-13T07:15:23.693355651Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 07:15:23.693383 env[1478]: time="2024-02-13T07:15:23.693369125Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 07:15:23.693430 env[1478]: time="2024-02-13T07:15:23.693395598Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693430 env[1478]: time="2024-02-13T07:15:23.693407694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693430 env[1478]: time="2024-02-13T07:15:23.693420764Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693504 env[1478]: time="2024-02-13T07:15:23.693432154Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693504 env[1478]: time="2024-02-13T07:15:23.693442960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 07:15:23.693504 env[1478]: time="2024-02-13T07:15:23.693454441Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693504 env[1478]: time="2024-02-13T07:15:23.693464772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693504 env[1478]: time="2024-02-13T07:15:23.693479685Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693504 env[1478]: time="2024-02-13T07:15:23.693488809Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 07:15:23.693604 env[1478]: time="2024-02-13T07:15:23.693555937Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 07:15:23.693623 env[1478]: time="2024-02-13T07:15:23.693611598Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 07:15:23.693774 env[1478]: time="2024-02-13T07:15:23.693751377Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 07:15:23.693774 env[1478]: time="2024-02-13T07:15:23.693769214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693821 env[1478]: time="2024-02-13T07:15:23.693778151Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 07:15:23.693821 env[1478]: time="2024-02-13T07:15:23.693810665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693821 env[1478]: time="2024-02-13T07:15:23.693820011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693867 env[1478]: time="2024-02-13T07:15:23.693827011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693867 env[1478]: time="2024-02-13T07:15:23.693833138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693867 env[1478]: time="2024-02-13T07:15:23.693839663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693867 env[1478]: time="2024-02-13T07:15:23.693846603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693867 env[1478]: time="2024-02-13T07:15:23.693859241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693948 env[1478]: time="2024-02-13T07:15:23.693869712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693948 env[1478]: time="2024-02-13T07:15:23.693877511Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 07:15:23.693984 env[1478]: time="2024-02-13T07:15:23.693953576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693984 env[1478]: time="2024-02-13T07:15:23.693963710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 07:15:23.693984 env[1478]: time="2024-02-13T07:15:23.693970266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.693984 env[1478]: time="2024-02-13T07:15:23.693976408Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 07:15:23.694045 env[1478]: time="2024-02-13T07:15:23.693985187Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 07:15:23.694045 env[1478]: time="2024-02-13T07:15:23.693994340Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 07:15:23.694045 env[1478]: time="2024-02-13T07:15:23.694009708Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 07:15:23.694045 env[1478]: time="2024-02-13T07:15:23.694032125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 07:15:23.694183 env[1478]: time="2024-02-13T07:15:23.694156810Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694191724Z" level=info msg="Connect containerd service" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694217041Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 
13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694516560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694838494Z" level=info msg="Start subscribing containerd event" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694868425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694906071Z" level=info msg="Start recovering state" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694921440Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694970637Z" level=info msg="containerd successfully booted in 0.028308s" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694974608Z" level=info msg="Start event monitor" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694987231Z" level=info msg="Start snapshots syncer" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.694999429Z" level=info msg="Start cni network conf syncer for default" Feb 13 07:15:23.695972 env[1478]: time="2024-02-13T07:15:23.695037636Z" level=info msg="Start streaming server" Feb 13 07:15:23.698550 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 07:15:23.698668 systemd[1]: Reached target user-config.target. Feb 13 07:15:23.706401 tar[1471]: ./ptp Feb 13 07:15:23.709006 systemd[1]: Started containerd.service. Feb 13 07:15:23.715754 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 07:15:23.729240 tar[1471]: ./vlan Feb 13 07:15:23.741430 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 07:15:23.751540 tar[1471]: ./host-device Feb 13 07:15:23.772915 tar[1471]: ./tuning Feb 13 07:15:23.791789 tar[1471]: ./vrf Feb 13 07:15:23.811513 tar[1471]: ./sbr Feb 13 07:15:23.830850 tar[1471]: ./tap Feb 13 07:15:23.852915 tar[1471]: ./dhcp Feb 13 07:15:23.909194 tar[1471]: ./static Feb 13 07:15:23.914852 tar[1473]: linux-amd64/LICENSE Feb 13 07:15:23.914906 tar[1473]: linux-amd64/README.md Feb 13 07:15:23.917529 systemd[1]: Finished prepare-helm.service. Feb 13 07:15:23.925098 tar[1471]: ./firewall Feb 13 07:15:23.928217 systemd[1]: Finished prepare-critools.service. Feb 13 07:15:23.949516 tar[1471]: ./macvlan Feb 13 07:15:23.971525 tar[1471]: ./dummy Feb 13 07:15:23.976433 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 07:15:24.004630 extend-filesystems[1453]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 07:15:24.004630 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 07:15:24.004630 extend-filesystems[1453]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 07:15:24.044439 extend-filesystems[1437]: Resized filesystem in /dev/sda9 Feb 13 07:15:24.044439 extend-filesystems[1437]: Found sdb Feb 13 07:15:24.059444 tar[1471]: ./bridge Feb 13 07:15:24.059444 tar[1471]: ./ipvlan Feb 13 07:15:24.005057 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 07:15:24.005145 systemd[1]: Finished extend-filesystems.service. 
Feb 13 07:15:24.067195 tar[1471]: ./portmap Feb 13 07:15:24.087926 tar[1471]: ./host-local Feb 13 07:15:24.111845 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 07:15:24.260948 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 07:15:24.273003 systemd[1]: Finished sshd-keygen.service. Feb 13 07:15:24.281188 systemd[1]: Starting issuegen.service... Feb 13 07:15:24.288628 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 07:15:24.288717 systemd[1]: Finished issuegen.service. Feb 13 07:15:24.297152 systemd[1]: Starting systemd-user-sessions.service... Feb 13 07:15:24.306642 systemd[1]: Finished systemd-user-sessions.service. Feb 13 07:15:24.307499 systemd-networkd[1334]: bond0: Gained IPv6LL Feb 13 07:15:24.307687 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:24.316103 systemd[1]: Started getty@tty1.service. Feb 13 07:15:24.324028 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 07:15:24.332544 systemd[1]: Reached target getty.target. Feb 13 07:15:25.074801 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:25.074919 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:25.341480 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 07:15:29.395226 login[1537]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 13 07:15:29.401734 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:15:29.419526 systemd[1]: Created slice user-500.slice. Feb 13 07:15:29.420055 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 07:15:29.421066 systemd-logind[1466]: New session 1 of user core. Feb 13 07:15:29.425275 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 07:15:29.425990 systemd[1]: Starting user@500.service... Feb 13 07:15:29.428135 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:15:29.502050 systemd[1542]: Queued start job for default target default.target. Feb 13 07:15:29.502287 systemd[1542]: Reached target paths.target. Feb 13 07:15:29.502304 systemd[1542]: Reached target sockets.target. Feb 13 07:15:29.502317 systemd[1542]: Reached target timers.target. Feb 13 07:15:29.502329 systemd[1542]: Reached target basic.target. Feb 13 07:15:29.502357 systemd[1542]: Reached target default.target. Feb 13 07:15:29.502378 systemd[1542]: Startup finished in 71ms. Feb 13 07:15:29.502415 systemd[1]: Started user@500.service. Feb 13 07:15:29.502954 systemd[1]: Started session-1.scope. Feb 13 07:15:29.619831 coreos-metadata[1429]: Feb 13 07:15:29.619 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 07:15:29.620605 coreos-metadata[1432]: Feb 13 07:15:29.619 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 07:15:30.396013 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:15:30.406591 systemd-logind[1466]: New session 2 of user core. Feb 13 07:15:30.409099 systemd[1]: Started session-2.scope. 
Feb 13 07:15:30.620262 coreos-metadata[1432]: Feb 13 07:15:30.620 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 07:15:30.620534 coreos-metadata[1429]: Feb 13 07:15:30.620 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 07:15:30.840669 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 07:15:30.840820 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 07:15:31.508946 systemd[1]: Created slice system-sshd.slice. Feb 13 07:15:31.509590 systemd[1]: Started sshd@0-145.40.90.207:22-139.178.68.195:44568.service. Feb 13 07:15:31.523479 coreos-metadata[1429]: Feb 13 07:15:31.523 INFO Fetch successful Feb 13 07:15:31.524337 coreos-metadata[1432]: Feb 13 07:15:31.524 INFO Fetch successful Feb 13 07:15:31.546385 unknown[1429]: wrote ssh authorized keys file for user: core Feb 13 07:15:31.546544 systemd[1]: Finished coreos-metadata.service. Feb 13 07:15:31.547270 systemd[1]: Started packet-phone-home.service. Feb 13 07:15:31.552163 curl[1572]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 07:15:31.552312 curl[1572]: Dload Upload Total Spent Left Speed Feb 13 07:15:31.553472 sshd[1568]: Accepted publickey for core from 139.178.68.195 port 44568 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:15:31.554154 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:15:31.556239 systemd-logind[1466]: New session 3 of user core. Feb 13 07:15:31.556729 systemd[1]: Started session-3.scope. Feb 13 07:15:31.557791 update-ssh-keys[1573]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:15:31.557989 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 07:15:31.558167 systemd[1]: Reached target multi-user.target. Feb 13 07:15:31.558845 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 07:15:31.562717 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 07:15:31.562788 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 07:15:31.562923 systemd[1]: Startup finished in 1.901s (kernel) + 29.840s (initrd) + 14.891s (userspace) = 46.633s. Feb 13 07:15:31.606437 systemd[1]: Started sshd@1-145.40.90.207:22-139.178.68.195:44578.service. Feb 13 07:15:31.639846 sshd[1578]: Accepted publickey for core from 139.178.68.195 port 44578 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:15:31.640634 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:15:31.642864 systemd-logind[1466]: New session 4 of user core. Feb 13 07:15:31.643368 systemd[1]: Started session-4.scope. Feb 13 07:15:31.694581 sshd[1578]: pam_unix(sshd:session): session closed for user core Feb 13 07:15:31.696110 systemd[1]: sshd@1-145.40.90.207:22-139.178.68.195:44578.service: Deactivated successfully. Feb 13 07:15:31.696436 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 07:15:31.696721 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Feb 13 07:15:31.697236 systemd[1]: Started sshd@2-145.40.90.207:22-139.178.68.195:44592.service. Feb 13 07:15:31.697613 systemd-logind[1466]: Removed session 4. 
Feb 13 07:15:31.731700 sshd[1584]: Accepted publickey for core from 139.178.68.195 port 44592 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:15:31.732931 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:15:31.736649 curl[1572]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 07:15:31.736947 systemd-logind[1466]: New session 5 of user core. Feb 13 07:15:31.737968 systemd[1]: Started session-5.scope. Feb 13 07:15:31.738441 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 07:15:31.796658 sshd[1584]: pam_unix(sshd:session): session closed for user core Feb 13 07:15:31.803317 systemd[1]: sshd@2-145.40.90.207:22-139.178.68.195:44592.service: Deactivated successfully. Feb 13 07:15:31.804975 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 07:15:31.806580 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Feb 13 07:15:31.809157 systemd[1]: Started sshd@3-145.40.90.207:22-139.178.68.195:44602.service. Feb 13 07:15:31.811365 systemd-logind[1466]: Removed session 5. Feb 13 07:15:31.874138 sshd[1590]: Accepted publickey for core from 139.178.68.195 port 44602 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:15:31.876384 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:15:31.883374 systemd-logind[1466]: New session 6 of user core. Feb 13 07:15:31.885042 systemd[1]: Started session-6.scope. Feb 13 07:15:31.962106 sshd[1590]: pam_unix(sshd:session): session closed for user core Feb 13 07:15:31.968631 systemd[1]: sshd@3-145.40.90.207:22-139.178.68.195:44602.service: Deactivated successfully. Feb 13 07:15:31.970194 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 07:15:31.971945 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Feb 13 07:15:31.974453 systemd[1]: Started sshd@4-145.40.90.207:22-139.178.68.195:44608.service. Feb 13 07:15:31.976691 systemd-logind[1466]: Removed session 6. Feb 13 07:15:32.046375 sshd[1596]: Accepted publickey for core from 139.178.68.195 port 44608 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:15:32.049461 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:15:32.059105 systemd-logind[1466]: New session 7 of user core. Feb 13 07:15:32.061457 systemd[1]: Started session-7.scope. Feb 13 07:15:32.157121 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 07:15:32.157740 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 07:15:36.189687 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 07:15:36.193964 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 07:15:36.194150 systemd[1]: Reached target network-online.target. Feb 13 07:15:36.194900 systemd[1]: Starting docker.service... 
Feb 13 07:15:36.230519 env[1620]: time="2024-02-13T07:15:36.230460289Z" level=info msg="Starting up" Feb 13 07:15:36.231060 env[1620]: time="2024-02-13T07:15:36.231048001Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 07:15:36.231060 env[1620]: time="2024-02-13T07:15:36.231057612Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 07:15:36.231124 env[1620]: time="2024-02-13T07:15:36.231074578Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 07:15:36.231124 env[1620]: time="2024-02-13T07:15:36.231081047Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 07:15:36.231833 env[1620]: time="2024-02-13T07:15:36.231823019Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 07:15:36.231833 env[1620]: time="2024-02-13T07:15:36.231830984Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 07:15:36.231885 env[1620]: time="2024-02-13T07:15:36.231839218Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 07:15:36.231885 env[1620]: time="2024-02-13T07:15:36.231844495Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 07:15:36.246481 env[1620]: time="2024-02-13T07:15:36.246460014Z" level=info msg="Loading containers: start." Feb 13 07:15:36.333451 kernel: Initializing XFRM netlink socket Feb 13 07:15:36.379844 env[1620]: time="2024-02-13T07:15:36.379816287Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 13 07:15:36.380587 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 07:15:36.384023 systemd-timesyncd[1421]: Contacted time server [2606:4700:f1::1]:123 (2.flatcar.pool.ntp.org). Feb 13 07:15:36.384074 systemd-timesyncd[1421]: Initial clock synchronization to Tue 2024-02-13 07:15:36.248450 UTC. Feb 13 07:15:36.493945 systemd-networkd[1334]: docker0: Link UP Feb 13 07:15:36.508588 env[1620]: time="2024-02-13T07:15:36.508498735Z" level=info msg="Loading containers: done." Feb 13 07:15:36.524443 env[1620]: time="2024-02-13T07:15:36.524315154Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 07:15:36.524816 env[1620]: time="2024-02-13T07:15:36.524720586Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 13 07:15:36.524980 env[1620]: time="2024-02-13T07:15:36.524945120Z" level=info msg="Daemon has completed initialization" Feb 13 07:15:36.526790 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3989944187-merged.mount: Deactivated successfully. Feb 13 07:15:36.546077 systemd[1]: Started docker.service. Feb 13 07:15:36.562032 env[1620]: time="2024-02-13T07:15:36.561896028Z" level=info msg="API listen on /run/docker.sock" Feb 13 07:15:36.603204 systemd[1]: Reloading. 
Feb 13 07:15:36.662804 /usr/lib/systemd/system-generators/torcx-generator[1774]: time="2024-02-13T07:15:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 07:15:36.662820 /usr/lib/systemd/system-generators/torcx-generator[1774]: time="2024-02-13T07:15:36Z" level=info msg="torcx already run"
Feb 13 07:15:36.714613 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 07:15:36.714621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 07:15:36.727180 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 07:15:36.779906 systemd[1]: Started kubelet.service.
Feb 13 07:15:36.802743 kubelet[1830]: E0213 07:15:36.802716 1830 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 13 07:15:36.803962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 07:15:36.804031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 07:15:37.477070 env[1478]: time="2024-02-13T07:15:37.477033641Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\""
Feb 13 07:15:38.117142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115922752.mount: Deactivated successfully.
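The kubelet crash above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is only written by kubeadm init/join, so every restart before that exits 1 with this error. A sketch of the same existence check:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// The file kubeadm writes during init/join; without it the kubelet
	// exits 1, producing exactly the crash loop in the log above.
	const path = "/var/lib/kubelet/config.yaml"

	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("%s missing: kubelet will keep failing until kubeadm generates it\n", path)
	}
}
```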
Feb 13 07:15:39.382098 env[1478]: time="2024-02-13T07:15:39.382040847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:39.382733 env[1478]: time="2024-02-13T07:15:39.382687618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:39.383981 env[1478]: time="2024-02-13T07:15:39.383946394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:39.384792 env[1478]: time="2024-02-13T07:15:39.384752281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:39.385236 env[1478]: time="2024-02-13T07:15:39.385195316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 13 07:15:39.392363 env[1478]: time="2024-02-13T07:15:39.392348613Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 13 07:15:41.065317 env[1478]: time="2024-02-13T07:15:41.065239672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:41.066465 env[1478]: time="2024-02-13T07:15:41.066407820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:41.068654 env[1478]: time="2024-02-13T07:15:41.068592960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:41.070678 env[1478]: time="2024-02-13T07:15:41.070624545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:41.071754 env[1478]: time="2024-02-13T07:15:41.071694495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 13 07:15:41.085324 env[1478]: time="2024-02-13T07:15:41.085282176Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 13 07:15:42.136465 env[1478]: time="2024-02-13T07:15:42.136415093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:42.137138 env[1478]: time="2024-02-13T07:15:42.137093759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:42.138148 env[1478]: 
time="2024-02-13T07:15:42.138102846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:42.139469 env[1478]: time="2024-02-13T07:15:42.139421422Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:42.139931 env[1478]: time="2024-02-13T07:15:42.139880808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 13 07:15:42.145942 env[1478]: time="2024-02-13T07:15:42.145922295Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 13 07:15:42.963220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986549085.mount: Deactivated successfully. Feb 13 07:15:43.290694 env[1478]: time="2024-02-13T07:15:43.290668945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.291361 env[1478]: time="2024-02-13T07:15:43.291348824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.292019 env[1478]: time="2024-02-13T07:15:43.292008558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.293002 env[1478]: time="2024-02-13T07:15:43.292989381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.293252 env[1478]: time="2024-02-13T07:15:43.293239489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 13 07:15:43.299518 env[1478]: time="2024-02-13T07:15:43.299501200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 07:15:43.838942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803328817.mount: Deactivated successfully. 
Feb 13 07:15:43.840514 env[1478]: time="2024-02-13T07:15:43.840461509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.841159 env[1478]: time="2024-02-13T07:15:43.841098181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.842029 env[1478]: time="2024-02-13T07:15:43.841996571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.842705 env[1478]: time="2024-02-13T07:15:43.842647538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:43.843067 env[1478]: time="2024-02-13T07:15:43.843016017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 07:15:43.849184 env[1478]: time="2024-02-13T07:15:43.849150999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 13 07:15:44.528227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330988551.mount: Deactivated successfully. Feb 13 07:15:47.024276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 07:15:47.024479 systemd[1]: Stopped kubelet.service. Feb 13 07:15:47.025421 systemd[1]: Started kubelet.service. Feb 13 07:15:47.049109 kubelet[1929]: E0213 07:15:47.049035 1929 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 13 07:15:47.051086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 07:15:47.051155 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 07:15:47.345071 env[1478]: time="2024-02-13T07:15:47.344988582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:47.345717 env[1478]: time="2024-02-13T07:15:47.345705547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:47.346690 env[1478]: time="2024-02-13T07:15:47.346680166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:47.347605 env[1478]: time="2024-02-13T07:15:47.347561539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:47.348026 env[1478]: time="2024-02-13T07:15:47.347973010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 13 07:15:47.353275 env[1478]: time="2024-02-13T07:15:47.353260920Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 13 07:15:47.881643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1347468652.mount: Deactivated successfully. Feb 13 07:15:48.347341 env[1478]: time="2024-02-13T07:15:48.347283495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:48.347968 env[1478]: time="2024-02-13T07:15:48.347929445Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:48.348713 env[1478]: time="2024-02-13T07:15:48.348679413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:48.349404 env[1478]: time="2024-02-13T07:15:48.349363228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:48.349820 env[1478]: time="2024-02-13T07:15:48.349761412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 13 07:15:49.953106 systemd[1]: Stopped kubelet.service. Feb 13 07:15:49.964600 systemd[1]: Reloading. 
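The kubelet drives the PullImage sequence above over the CRI socket, but the same result can be reproduced directly against containerd. A sketch assuming the github.com/containerd/containerd client library and the stock socket path; the "k8s.io" namespace matches the namespace=k8s.io fields that appear later in this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack one of the images from the sequence above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```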
Feb 13 07:15:49.996839 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2024-02-13T07:15:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:15:49.996871 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2024-02-13T07:15:49Z" level=info msg="torcx already run" Feb 13 07:15:50.048051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:15:50.048058 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:15:50.060221 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:15:50.114765 systemd[1]: Started kubelet.service. Feb 13 07:15:50.136833 kubelet[2156]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:15:50.136833 kubelet[2156]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 07:15:50.136833 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:15:50.137055 kubelet[2156]: I0213 07:15:50.136829 2156 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 07:15:50.368093 kubelet[2156]: I0213 07:15:50.368062 2156 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 13 07:15:50.368093 kubelet[2156]: I0213 07:15:50.368089 2156 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 07:15:50.368225 kubelet[2156]: I0213 07:15:50.368198 2156 server.go:837] "Client rotation is on, will bootstrap in background" Feb 13 07:15:50.370771 kubelet[2156]: I0213 07:15:50.370737 2156 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 07:15:50.371310 kubelet[2156]: E0213 07:15:50.371275 2156 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://145.40.90.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.388554 kubelet[2156]: I0213 07:15:50.388519 2156 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 07:15:50.388691 kubelet[2156]: I0213 07:15:50.388653 2156 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 07:15:50.388691 kubelet[2156]: I0213 07:15:50.388688 2156 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 07:15:50.388769 kubelet[2156]: I0213 07:15:50.388696 2156 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 07:15:50.388769 kubelet[2156]: I0213 07:15:50.388702 2156 container_manager_linux.go:302] "Creating device plugin manager" Feb 13 07:15:50.388769 kubelet[2156]: I0213 07:15:50.388741 2156 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:15:50.390357 kubelet[2156]: I0213 07:15:50.390349 2156 kubelet.go:405] "Attempting to sync node with API server" Feb 13 07:15:50.390430 kubelet[2156]: I0213 07:15:50.390378 2156 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 07:15:50.390430 kubelet[2156]: I0213 07:15:50.390394 2156 kubelet.go:309] "Adding apiserver pod source" Feb 13 07:15:50.390430 kubelet[2156]: I0213 07:15:50.390422 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 07:15:50.390696 kubelet[2156]: W0213 07:15:50.390677 2156 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://145.40.90.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-fe1fbff781&limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.390696 kubelet[2156]: I0213 07:15:50.390687 2156 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 07:15:50.390762 kubelet[2156]: E0213 07:15:50.390702 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-fe1fbff781&limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.390762 kubelet[2156]: W0213 07:15:50.390699 2156 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list 
*v1.Service: Get "https://145.40.90.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.390762 kubelet[2156]: E0213 07:15:50.390721 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.391052 kubelet[2156]: W0213 07:15:50.390857 2156 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 07:15:50.391194 kubelet[2156]: I0213 07:15:50.391169 2156 server.go:1168] "Started kubelet" Feb 13 07:15:50.391247 kubelet[2156]: I0213 07:15:50.391240 2156 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 07:15:50.391267 kubelet[2156]: I0213 07:15:50.391250 2156 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 07:15:50.391402 kubelet[2156]: E0213 07:15:50.391322 2156 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-fe1fbff781.17b35ad406709b21", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-fe1fbff781", UID:"ci-3510.3.2-a-fe1fbff781", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-fe1fbff781"}, FirstTimestamp:time.Date(2024, time.February, 13, 7, 15, 50, 391159585, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 7, 15, 50, 391159585, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://145.40.90.207:6443/api/v1/namespaces/default/events": dial tcp 145.40.90.207:6443: connect: connection refused'(may retry after sleeping) Feb 13 07:15:50.391460 kubelet[2156]: E0213 07:15:50.391416 2156 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 07:15:50.391460 kubelet[2156]: E0213 07:15:50.391425 2156 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 07:15:50.401517 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
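The reflector errors above are client-go informers listing Nodes, Services, and CSIDrivers against an apiserver that is not up yet, hence "connection refused" on 145.40.90.207:6443. A minimal client-go sketch of the same List call, assuming kubeadm's usual /etc/kubernetes/kubelet.conf kubeconfig path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeadm's kubelet kubeconfig path; adjust for other setups.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The same List the reflector retries; until the kube-apiserver static
	// pod is serving on :6443 this fails with "connection refused".
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```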
Feb 13 07:15:50.401630 kubelet[2156]: I0213 07:15:50.401563 2156 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 07:15:50.401915 kubelet[2156]: I0213 07:15:50.401877 2156 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 13 07:15:50.401947 kubelet[2156]: I0213 07:15:50.401920 2156 server.go:461] "Adding debug handlers to kubelet server" Feb 13 07:15:50.401974 kubelet[2156]: I0213 07:15:50.401964 2156 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 13 07:15:50.402086 kubelet[2156]: E0213 07:15:50.402079 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-fe1fbff781?timeout=10s\": dial tcp 145.40.90.207:6443: connect: connection refused" interval="200ms" Feb 13 07:15:50.402178 kubelet[2156]: W0213 07:15:50.402153 2156 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.402212 kubelet[2156]: E0213 07:15:50.402188 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.408802 kubelet[2156]: I0213 07:15:50.408787 2156 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 07:15:50.409259 kubelet[2156]: I0213 07:15:50.409250 2156 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 13 07:15:50.409306 kubelet[2156]: I0213 07:15:50.409269 2156 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 13 07:15:50.409306 kubelet[2156]: I0213 07:15:50.409282 2156 kubelet.go:2257] "Starting kubelet main sync loop" Feb 13 07:15:50.409342 kubelet[2156]: E0213 07:15:50.409321 2156 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 07:15:50.409531 kubelet[2156]: W0213 07:15:50.409516 2156 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.409568 kubelet[2156]: E0213 07:15:50.409540 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:50.424489 kubelet[2156]: I0213 07:15:50.424479 2156 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 07:15:50.424489 kubelet[2156]: I0213 07:15:50.424487 2156 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 07:15:50.424547 kubelet[2156]: I0213 07:15:50.424496 2156 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:15:50.425189 kubelet[2156]: I0213 07:15:50.425183 2156 policy_none.go:49] "None policy: Start" Feb 13 07:15:50.425385 kubelet[2156]: I0213 07:15:50.425379 2156 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 07:15:50.425385 kubelet[2156]: I0213 07:15:50.425395 2156 state_mem.go:35] "Initializing new in-memory state store" Feb 13 07:15:50.427529 systemd[1]: Created slice kubepods.slice. Feb 13 07:15:50.429446 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 07:15:50.430739 systemd[1]: Created slice kubepods-besteffort.slice. Feb 13 07:15:50.456178 kubelet[2156]: I0213 07:15:50.456128 2156 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 07:15:50.456357 kubelet[2156]: I0213 07:15:50.456344 2156 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 07:15:50.456711 kubelet[2156]: E0213 07:15:50.456673 2156 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-fe1fbff781\" not found" Feb 13 07:15:50.506510 kubelet[2156]: I0213 07:15:50.506428 2156 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.507156 kubelet[2156]: E0213 07:15:50.507079 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://145.40.90.207:6443/api/v1/nodes\": dial tcp 145.40.90.207:6443: connect: connection refused" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.510404 kubelet[2156]: I0213 07:15:50.510317 2156 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:15:50.516110 kubelet[2156]: I0213 07:15:50.516068 2156 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:15:50.519730 kubelet[2156]: I0213 07:15:50.519681 2156 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:15:50.532355 systemd[1]: Created slice kubepods-burstable-podb63898a1884a20f25c13460d9ac17a8d.slice. 
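The lease controller's retry interval doubles across these failures: interval="200ms" above, then 400ms and 800ms below. A generic exponential-backoff sketch of that pattern; tryEnsureLease is a hypothetical stand-in, and the uncapped doubling is illustrative rather than the kubelet's exact policy:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryEnsureLease is a hypothetical stand-in for the kubelet's lease update;
// here it always fails, as it does while the apiserver is unreachable.
func tryEnsureLease() error {
	return errors.New("connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	for attempt := 1; attempt <= 3; attempt++ {
		if err := tryEnsureLease(); err == nil {
			return
		}
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms, matching the logged intervals
	}
}
```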
Feb 13 07:15:50.569973 systemd[1]: Created slice kubepods-burstable-pod2f74967983520423babbb241c6561a50.slice. Feb 13 07:15:50.579181 systemd[1]: Created slice kubepods-burstable-podcda7301c21ba10e8852d303b70437bfc.slice. Feb 13 07:15:50.603001 kubelet[2156]: I0213 07:15:50.602934 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63898a1884a20f25c13460d9ac17a8d-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" (UID: \"b63898a1884a20f25c13460d9ac17a8d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.603285 kubelet[2156]: E0213 07:15:50.603180 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-fe1fbff781?timeout=10s\": dial tcp 145.40.90.207:6443: connect: connection refused" interval="400ms" Feb 13 07:15:50.704300 kubelet[2156]: I0213 07:15:50.704072 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.704300 kubelet[2156]: I0213 07:15:50.704279 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.704804 kubelet[2156]: I0213 07:15:50.704428 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cda7301c21ba10e8852d303b70437bfc-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-fe1fbff781\" (UID: \"cda7301c21ba10e8852d303b70437bfc\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.704804 kubelet[2156]: I0213 07:15:50.704546 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.704804 kubelet[2156]: I0213 07:15:50.704665 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.704804 kubelet[2156]: I0213 07:15:50.704766 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.705325 kubelet[2156]: I0213 07:15:50.704985 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63898a1884a20f25c13460d9ac17a8d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" (UID: \"b63898a1884a20f25c13460d9ac17a8d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.705325 kubelet[2156]: I0213 07:15:50.705114 2156 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63898a1884a20f25c13460d9ac17a8d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" (UID: \"b63898a1884a20f25c13460d9ac17a8d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.711987 kubelet[2156]: I0213 07:15:50.711942 2156 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.712654 kubelet[2156]: E0213 07:15:50.712586 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://145.40.90.207:6443/api/v1/nodes\": dial tcp 145.40.90.207:6443: connect: connection refused" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:50.865447 env[1478]: time="2024-02-13T07:15:50.865307640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-fe1fbff781,Uid:b63898a1884a20f25c13460d9ac17a8d,Namespace:kube-system,Attempt:0,}" Feb 13 07:15:50.875593 env[1478]: time="2024-02-13T07:15:50.875473557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-fe1fbff781,Uid:2f74967983520423babbb241c6561a50,Namespace:kube-system,Attempt:0,}" Feb 13 07:15:50.884582 env[1478]: time="2024-02-13T07:15:50.884457092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-fe1fbff781,Uid:cda7301c21ba10e8852d303b70437bfc,Namespace:kube-system,Attempt:0,}" Feb 13 07:15:51.004709 kubelet[2156]: E0213 07:15:51.004515 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-fe1fbff781?timeout=10s\": dial tcp 145.40.90.207:6443: connect: connection refused" interval="800ms" Feb 13 07:15:51.117564 kubelet[2156]: I0213 07:15:51.117466 2156 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:51.118227 kubelet[2156]: E0213 07:15:51.118162 2156 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://145.40.90.207:6443/api/v1/nodes\": dial tcp 145.40.90.207:6443: connect: connection refused" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:51.240195 kubelet[2156]: W0213 07:15:51.240056 2156 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://145.40.90.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:51.240195 kubelet[2156]: E0213 07:15:51.240178 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.207:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:51.270811 kubelet[2156]: W0213 07:15:51.270609 2156 
reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:51.270811 kubelet[2156]: E0213 07:15:51.270713 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:51.287675 kubelet[2156]: W0213 07:15:51.287496 2156 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:51.287675 kubelet[2156]: E0213 07:15:51.287646 2156 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.207:6443: connect: connection refused Feb 13 07:15:51.386747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056166600.mount: Deactivated successfully. Feb 13 07:15:51.387775 env[1478]: time="2024-02-13T07:15:51.387754382Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.388573 env[1478]: time="2024-02-13T07:15:51.388559228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.389117 env[1478]: time="2024-02-13T07:15:51.389107030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.389713 env[1478]: time="2024-02-13T07:15:51.389702752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.390173 env[1478]: time="2024-02-13T07:15:51.390146465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.391297 env[1478]: time="2024-02-13T07:15:51.391285991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.391706 env[1478]: time="2024-02-13T07:15:51.391697125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.393466 env[1478]: time="2024-02-13T07:15:51.393439621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.394065 env[1478]: time="2024-02-13T07:15:51.393874055Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.395989 env[1478]: time="2024-02-13T07:15:51.395941112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.396811 env[1478]: time="2024-02-13T07:15:51.396772272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.397268 env[1478]: time="2024-02-13T07:15:51.397228330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:15:51.401321 env[1478]: time="2024-02-13T07:15:51.401290786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:15:51.401321 env[1478]: time="2024-02-13T07:15:51.401315240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:15:51.401405 env[1478]: time="2024-02-13T07:15:51.401322264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:15:51.401434 env[1478]: time="2024-02-13T07:15:51.401420344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ec90899e7134186c19d44786ec39c1656dc3bb927fc41601b588243786d7fe2 pid=2206 runtime=io.containerd.runc.v2 Feb 13 07:15:51.405328 env[1478]: time="2024-02-13T07:15:51.405286019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:15:51.405328 env[1478]: time="2024-02-13T07:15:51.405312241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:15:51.405328 env[1478]: time="2024-02-13T07:15:51.405319396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:15:51.405482 env[1478]: time="2024-02-13T07:15:51.405339218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:15:51.405482 env[1478]: time="2024-02-13T07:15:51.405355581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:15:51.405482 env[1478]: time="2024-02-13T07:15:51.405363548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:15:51.405482 env[1478]: time="2024-02-13T07:15:51.405398284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/585a4e1c510929d42f63c70f54aa270abc774a6ca8835e18ab86c2fdba421bef pid=2236 runtime=io.containerd.runc.v2 Feb 13 07:15:51.405482 env[1478]: time="2024-02-13T07:15:51.405433559Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/529b248169080381a3e29685c84be6e8c3ea0f941eb5e2e39b2db602ea2fe54b pid=2237 runtime=io.containerd.runc.v2 Feb 13 07:15:51.408247 systemd[1]: Started cri-containerd-9ec90899e7134186c19d44786ec39c1656dc3bb927fc41601b588243786d7fe2.scope. Feb 13 07:15:51.412972 systemd[1]: Started cri-containerd-529b248169080381a3e29685c84be6e8c3ea0f941eb5e2e39b2db602ea2fe54b.scope. Feb 13 07:15:51.413767 systemd[1]: Started cri-containerd-585a4e1c510929d42f63c70f54aa270abc774a6ca8835e18ab86c2fdba421bef.scope. Feb 13 07:15:51.434465 env[1478]: time="2024-02-13T07:15:51.434424945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-fe1fbff781,Uid:b63898a1884a20f25c13460d9ac17a8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ec90899e7134186c19d44786ec39c1656dc3bb927fc41601b588243786d7fe2\"" Feb 13 07:15:51.436286 env[1478]: time="2024-02-13T07:15:51.436259853Z" level=info msg="CreateContainer within sandbox \"9ec90899e7134186c19d44786ec39c1656dc3bb927fc41601b588243786d7fe2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 07:15:51.437231 env[1478]: time="2024-02-13T07:15:51.437213564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-fe1fbff781,Uid:2f74967983520423babbb241c6561a50,Namespace:kube-system,Attempt:0,} returns sandbox id \"529b248169080381a3e29685c84be6e8c3ea0f941eb5e2e39b2db602ea2fe54b\"" Feb 13 07:15:51.437837 env[1478]: time="2024-02-13T07:15:51.437819493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-fe1fbff781,Uid:cda7301c21ba10e8852d303b70437bfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"585a4e1c510929d42f63c70f54aa270abc774a6ca8835e18ab86c2fdba421bef\"" Feb 13 07:15:51.438314 env[1478]: time="2024-02-13T07:15:51.438299388Z" level=info msg="CreateContainer within sandbox \"529b248169080381a3e29685c84be6e8c3ea0f941eb5e2e39b2db602ea2fe54b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 07:15:51.439210 env[1478]: time="2024-02-13T07:15:51.439195727Z" level=info msg="CreateContainer within sandbox \"585a4e1c510929d42f63c70f54aa270abc774a6ca8835e18ab86c2fdba421bef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 07:15:51.442392 env[1478]: time="2024-02-13T07:15:51.442350563Z" level=info msg="CreateContainer within sandbox \"9ec90899e7134186c19d44786ec39c1656dc3bb927fc41601b588243786d7fe2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc7b165edb5d3691b65cea897fbf38431d6c02d5d4cecabd7a0b536b6cde77da\"" Feb 13 07:15:51.442594 env[1478]: time="2024-02-13T07:15:51.442580821Z" level=info msg="StartContainer for \"bc7b165edb5d3691b65cea897fbf38431d6c02d5d4cecabd7a0b536b6cde77da\"" Feb 13 07:15:51.444470 env[1478]: time="2024-02-13T07:15:51.444455648Z" level=info msg="CreateContainer within sandbox \"585a4e1c510929d42f63c70f54aa270abc774a6ca8835e18ab86c2fdba421bef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} 
returns container id \"efab614e7925fe853e1a9dfa24169a7aec634b3a8d749a93763ffb72bcd756ba\"" Feb 13 07:15:51.444663 env[1478]: time="2024-02-13T07:15:51.444649905Z" level=info msg="StartContainer for \"efab614e7925fe853e1a9dfa24169a7aec634b3a8d749a93763ffb72bcd756ba\"" Feb 13 07:15:51.445658 env[1478]: time="2024-02-13T07:15:51.445642167Z" level=info msg="CreateContainer within sandbox \"529b248169080381a3e29685c84be6e8c3ea0f941eb5e2e39b2db602ea2fe54b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7eebe75c2c1801b8396e8501413d2438c4082de3ba413e8e778e997e6d8ed10\"" Feb 13 07:15:51.445862 env[1478]: time="2024-02-13T07:15:51.445844988Z" level=info msg="StartContainer for \"e7eebe75c2c1801b8396e8501413d2438c4082de3ba413e8e778e997e6d8ed10\"" Feb 13 07:15:51.450994 systemd[1]: Started cri-containerd-bc7b165edb5d3691b65cea897fbf38431d6c02d5d4cecabd7a0b536b6cde77da.scope. Feb 13 07:15:51.452631 systemd[1]: Started cri-containerd-efab614e7925fe853e1a9dfa24169a7aec634b3a8d749a93763ffb72bcd756ba.scope. Feb 13 07:15:51.454732 systemd[1]: Started cri-containerd-e7eebe75c2c1801b8396e8501413d2438c4082de3ba413e8e778e997e6d8ed10.scope. Feb 13 07:15:51.479634 env[1478]: time="2024-02-13T07:15:51.479605723Z" level=info msg="StartContainer for \"efab614e7925fe853e1a9dfa24169a7aec634b3a8d749a93763ffb72bcd756ba\" returns successfully" Feb 13 07:15:51.479773 env[1478]: time="2024-02-13T07:15:51.479734561Z" level=info msg="StartContainer for \"e7eebe75c2c1801b8396e8501413d2438c4082de3ba413e8e778e997e6d8ed10\" returns successfully" Feb 13 07:15:51.479773 env[1478]: time="2024-02-13T07:15:51.479737751Z" level=info msg="StartContainer for \"bc7b165edb5d3691b65cea897fbf38431d6c02d5d4cecabd7a0b536b6cde77da\" returns successfully" Feb 13 07:15:51.920082 kubelet[2156]: I0213 07:15:51.920037 2156 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:52.196257 kubelet[2156]: I0213 07:15:52.196163 2156 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:52.249904 kubelet[2156]: E0213 07:15:52.249853 2156 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Feb 13 07:15:52.391351 kubelet[2156]: I0213 07:15:52.391234 2156 apiserver.go:52] "Watching apiserver" Feb 13 07:15:52.402180 kubelet[2156]: I0213 07:15:52.402132 2156 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 13 07:15:52.414735 kubelet[2156]: I0213 07:15:52.414684 2156 reconciler.go:41] "Reconciler: start to sync state" Feb 13 07:15:52.427100 kubelet[2156]: E0213 07:15:52.427005 2156 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:52.427100 kubelet[2156]: E0213 07:15:52.427007 2156 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-fe1fbff781\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:52.427380 kubelet[2156]: E0213 07:15:52.427127 2156 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:53.431618 kubelet[2156]: W0213 07:15:53.431573 2156 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 07:15:55.245817 systemd[1]: Reloading. Feb 13 07:15:55.277160 /usr/lib/systemd/system-generators/torcx-generator[2497]: time="2024-02-13T07:15:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:15:55.277188 /usr/lib/systemd/system-generators/torcx-generator[2497]: time="2024-02-13T07:15:55Z" level=info msg="torcx already run" Feb 13 07:15:55.335489 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:15:55.335497 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:15:55.349754 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:15:55.414241 kubelet[2156]: I0213 07:15:55.414170 2156 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 07:15:55.414199 systemd[1]: Stopping kubelet.service... Feb 13 07:15:55.433789 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 07:15:55.433891 systemd[1]: Stopped kubelet.service. Feb 13 07:15:55.434759 systemd[1]: Started kubelet.service. Feb 13 07:15:55.457964 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:15:55.457964 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 07:15:55.457964 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:15:55.458182 kubelet[2556]: I0213 07:15:55.458001 2556 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 07:15:55.460337 kubelet[2556]: I0213 07:15:55.460327 2556 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 13 07:15:55.460337 kubelet[2556]: I0213 07:15:55.460337 2556 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 07:15:55.460439 kubelet[2556]: I0213 07:15:55.460434 2556 server.go:837] "Client rotation is on, will bootstrap in background" Feb 13 07:15:55.461282 kubelet[2556]: I0213 07:15:55.461276 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 07:15:55.461915 kubelet[2556]: I0213 07:15:55.461868 2556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 07:15:55.480516 kubelet[2556]: I0213 07:15:55.480506 2556 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 07:15:55.480662 kubelet[2556]: I0213 07:15:55.480622 2556 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 07:15:55.480662 kubelet[2556]: I0213 07:15:55.480660 2556 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 07:15:55.480761 kubelet[2556]: I0213 07:15:55.480670 2556 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 07:15:55.480761 kubelet[2556]: I0213 07:15:55.480679 2556 container_manager_linux.go:302] "Creating device plugin manager" Feb 13 07:15:55.480761 kubelet[2556]: I0213 07:15:55.480698 2556 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:15:55.482435 kubelet[2556]: I0213 07:15:55.482413 2556 kubelet.go:405] "Attempting to sync node with API server" Feb 13 07:15:55.482488 kubelet[2556]: I0213 07:15:55.482450 2556 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 07:15:55.482488 kubelet[2556]: I0213 07:15:55.482473 2556 kubelet.go:309] "Adding apiserver pod source" Feb 13 07:15:55.482550 kubelet[2556]: I0213 07:15:55.482491 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 07:15:55.483177 kubelet[2556]: I0213 07:15:55.483160 2556 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 07:15:55.483821 kubelet[2556]: I0213 07:15:55.483793 2556 server.go:1168] "Started kubelet" Feb 13 07:15:55.483880 kubelet[2556]: I0213 07:15:55.483862 2556 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 07:15:55.483917 kubelet[2556]: I0213 07:15:55.483888 2556 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 07:15:55.484139 kubelet[2556]: E0213 07:15:55.484124 2556 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find 
data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 07:15:55.484187 kubelet[2556]: E0213 07:15:55.484144 2556 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 07:15:55.484494 kubelet[2556]: I0213 07:15:55.484486 2556 server.go:461] "Adding debug handlers to kubelet server" Feb 13 07:15:55.484696 kubelet[2556]: I0213 07:15:55.484685 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 07:15:55.484766 kubelet[2556]: I0213 07:15:55.484748 2556 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 13 07:15:55.484766 kubelet[2556]: E0213 07:15:55.484762 2556 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-fe1fbff781\" not found" Feb 13 07:15:55.484852 kubelet[2556]: I0213 07:15:55.484779 2556 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 13 07:15:55.489671 kubelet[2556]: I0213 07:15:55.489617 2556 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 07:15:55.490245 kubelet[2556]: I0213 07:15:55.490229 2556 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 13 07:15:55.490306 kubelet[2556]: I0213 07:15:55.490254 2556 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 13 07:15:55.490306 kubelet[2556]: I0213 07:15:55.490284 2556 kubelet.go:2257] "Starting kubelet main sync loop" Feb 13 07:15:55.490374 kubelet[2556]: E0213 07:15:55.490329 2556 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 07:15:55.507457 kubelet[2556]: I0213 07:15:55.507406 2556 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 07:15:55.507457 kubelet[2556]: I0213 07:15:55.507419 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 07:15:55.507457 kubelet[2556]: I0213 07:15:55.507429 2556 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:15:55.507571 kubelet[2556]: I0213 07:15:55.507525 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 07:15:55.507571 kubelet[2556]: I0213 07:15:55.507534 2556 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 13 07:15:55.507571 kubelet[2556]: I0213 07:15:55.507538 2556 policy_none.go:49] "None policy: Start" Feb 13 07:15:55.507921 kubelet[2556]: I0213 07:15:55.507877 2556 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 07:15:55.507921 kubelet[2556]: I0213 07:15:55.507889 2556 state_mem.go:35] "Initializing new in-memory state store" Feb 13 07:15:55.507983 kubelet[2556]: I0213 07:15:55.507971 2556 state_mem.go:75] "Updated machine memory state" Feb 13 07:15:55.509932 kubelet[2556]: I0213 07:15:55.509887 2556 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 07:15:55.510054 kubelet[2556]: I0213 07:15:55.510009 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 07:15:55.588066 kubelet[2556]: I0213 07:15:55.588040 2556 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.590565 kubelet[2556]: I0213 07:15:55.590538 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:15:55.590657 kubelet[2556]: I0213 07:15:55.590645 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 
07:15:55.590739 kubelet[2556]: I0213 07:15:55.590714 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:15:55.596966 kubelet[2556]: W0213 07:15:55.596915 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 07:15:55.596966 kubelet[2556]: W0213 07:15:55.596964 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 07:15:55.597705 kubelet[2556]: W0213 07:15:55.597665 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 07:15:55.597901 kubelet[2556]: E0213 07:15:55.597840 2556 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.601994 kubelet[2556]: I0213 07:15:55.601912 2556 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.602179 kubelet[2556]: I0213 07:15:55.602066 2556 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.639173 sudo[2599]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 07:15:55.640048 sudo[2599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 13 07:15:55.786228 kubelet[2556]: I0213 07:15:55.786215 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63898a1884a20f25c13460d9ac17a8d-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" (UID: \"b63898a1884a20f25c13460d9ac17a8d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786305 kubelet[2556]: I0213 07:15:55.786238 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63898a1884a20f25c13460d9ac17a8d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" (UID: \"b63898a1884a20f25c13460d9ac17a8d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786305 kubelet[2556]: I0213 07:15:55.786254 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63898a1884a20f25c13460d9ac17a8d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" (UID: \"b63898a1884a20f25c13460d9ac17a8d\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786305 kubelet[2556]: I0213 07:15:55.786266 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786305 kubelet[2556]: I0213 07:15:55.786283 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786305 kubelet[2556]: I0213 07:15:55.786297 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786405 kubelet[2556]: I0213 07:15:55.786337 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786405 kubelet[2556]: I0213 07:15:55.786379 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f74967983520423babbb241c6561a50-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" (UID: \"2f74967983520423babbb241c6561a50\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:55.786440 kubelet[2556]: I0213 07:15:55.786412 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cda7301c21ba10e8852d303b70437bfc-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-fe1fbff781\" (UID: \"cda7301c21ba10e8852d303b70437bfc\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:56.012364 sudo[2599]: pam_unix(sudo:session): session closed for user root Feb 13 07:15:56.483840 kubelet[2556]: I0213 07:15:56.483715 2556 apiserver.go:52] "Watching apiserver" Feb 13 07:15:56.507102 kubelet[2556]: W0213 07:15:56.507044 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 07:15:56.507102 kubelet[2556]: W0213 07:15:56.507089 2556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 07:15:56.507439 kubelet[2556]: E0213 07:15:56.507230 2556 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-fe1fbff781\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:56.507439 kubelet[2556]: E0213 07:15:56.507225 2556 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-fe1fbff781\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" Feb 13 07:15:56.550203 kubelet[2556]: I0213 07:15:56.550150 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-fe1fbff781" podStartSLOduration=1.550106563 podCreationTimestamp="2024-02-13 07:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:15:56.550028113 +0000 UTC m=+1.113553843" 
watchObservedRunningTime="2024-02-13 07:15:56.550106563 +0000 UTC m=+1.113632286" Feb 13 07:15:56.550353 kubelet[2556]: I0213 07:15:56.550230 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-fe1fbff781" podStartSLOduration=1.550203878 podCreationTimestamp="2024-02-13 07:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:15:56.542086926 +0000 UTC m=+1.105612668" watchObservedRunningTime="2024-02-13 07:15:56.550203878 +0000 UTC m=+1.113729600" Feb 13 07:15:56.563985 kubelet[2556]: I0213 07:15:56.563959 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-fe1fbff781" podStartSLOduration=3.563922694 podCreationTimestamp="2024-02-13 07:15:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:15:56.556699174 +0000 UTC m=+1.120224906" watchObservedRunningTime="2024-02-13 07:15:56.563922694 +0000 UTC m=+1.127448416" Feb 13 07:15:56.586639 kubelet[2556]: I0213 07:15:56.586539 2556 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 13 07:15:56.591442 kubelet[2556]: I0213 07:15:56.591333 2556 reconciler.go:41] "Reconciler: start to sync state" Feb 13 07:15:56.855225 sudo[1600]: pam_unix(sudo:session): session closed for user root Feb 13 07:15:56.856010 sshd[1596]: pam_unix(sshd:session): session closed for user core Feb 13 07:15:56.857485 systemd[1]: sshd@4-145.40.90.207:22-139.178.68.195:44608.service: Deactivated successfully. Feb 13 07:15:56.857946 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 07:15:56.858037 systemd[1]: session-7.scope: Consumed 2.802s CPU time. Feb 13 07:15:56.858341 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Feb 13 07:15:56.858930 systemd-logind[1466]: Removed session 7. Feb 13 07:16:08.942137 update_engine[1468]: I0213 07:16:08.942027 1468 update_attempter.cc:509] Updating boot flags... Feb 13 07:16:10.015000 kubelet[2556]: I0213 07:16:10.014954 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:16:10.015662 kubelet[2556]: I0213 07:16:10.015653 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:16:10.018608 systemd[1]: Created slice kubepods-besteffort-pod70dcfd5e_85fc_4afa_9db5_2bc0b2719268.slice. Feb 13 07:16:10.039550 systemd[1]: Created slice kubepods-burstable-podf4bcaa9a_7aca_4cd2_b879_af34c9ad2c55.slice. 
Feb 13 07:16:10.074601 kubelet[2556]: I0213 07:16:10.074584 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cni-path\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074601 kubelet[2556]: I0213 07:16:10.074604 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hubble-tls\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074720 kubelet[2556]: I0213 07:16:10.074618 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m25j\" (UniqueName: \"kubernetes.io/projected/70dcfd5e-85fc-4afa-9db5-2bc0b2719268-kube-api-access-6m25j\") pod \"kube-proxy-4mf6x\" (UID: \"70dcfd5e-85fc-4afa-9db5-2bc0b2719268\") " pod="kube-system/kube-proxy-4mf6x" Feb 13 07:16:10.074720 kubelet[2556]: I0213 07:16:10.074630 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hostproc\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074720 kubelet[2556]: I0213 07:16:10.074647 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-net\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074720 kubelet[2556]: I0213 07:16:10.074680 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-run\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074720 kubelet[2556]: I0213 07:16:10.074719 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpxl5\" (UniqueName: \"kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-kube-api-access-vpxl5\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074817 kubelet[2556]: I0213 07:16:10.074744 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-cgroup\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074817 kubelet[2556]: I0213 07:16:10.074760 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-etc-cni-netd\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074817 kubelet[2556]: I0213 07:16:10.074774 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/70dcfd5e-85fc-4afa-9db5-2bc0b2719268-xtables-lock\") pod \"kube-proxy-4mf6x\" (UID: \"70dcfd5e-85fc-4afa-9db5-2bc0b2719268\") " pod="kube-system/kube-proxy-4mf6x" Feb 13 07:16:10.074817 kubelet[2556]: I0213 07:16:10.074784 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-lib-modules\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074817 kubelet[2556]: I0213 07:16:10.074799 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-clustermesh-secrets\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074817 kubelet[2556]: I0213 07:16:10.074810 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-bpf-maps\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074918 kubelet[2556]: I0213 07:16:10.074823 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70dcfd5e-85fc-4afa-9db5-2bc0b2719268-kube-proxy\") pod \"kube-proxy-4mf6x\" (UID: \"70dcfd5e-85fc-4afa-9db5-2bc0b2719268\") " pod="kube-system/kube-proxy-4mf6x" Feb 13 07:16:10.074918 kubelet[2556]: I0213 07:16:10.074837 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-kernel\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074918 kubelet[2556]: I0213 07:16:10.074851 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70dcfd5e-85fc-4afa-9db5-2bc0b2719268-lib-modules\") pod \"kube-proxy-4mf6x\" (UID: \"70dcfd5e-85fc-4afa-9db5-2bc0b2719268\") " pod="kube-system/kube-proxy-4mf6x" Feb 13 07:16:10.074918 kubelet[2556]: I0213 07:16:10.074866 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-xtables-lock\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.074918 kubelet[2556]: I0213 07:16:10.074891 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-config-path\") pod \"cilium-bp5sw\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " pod="kube-system/cilium-bp5sw" Feb 13 07:16:10.076889 kubelet[2556]: I0213 07:16:10.076864 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:16:10.080300 systemd[1]: Created slice kubepods-besteffort-pod24c56616_2415_4d16_9d03_0d7c06962ec2.slice. 
Feb 13 07:16:10.112985 kubelet[2556]: I0213 07:16:10.112971 2556 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 07:16:10.113236 env[1478]: time="2024-02-13T07:16:10.113215560Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 07:16:10.113431 kubelet[2556]: I0213 07:16:10.113362 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 07:16:10.176304 kubelet[2556]: I0213 07:16:10.176232 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24c56616-2415-4d16-9d03-0d7c06962ec2-cilium-config-path\") pod \"cilium-operator-574c4bb98d-vxhgv\" (UID: \"24c56616-2415-4d16-9d03-0d7c06962ec2\") " pod="kube-system/cilium-operator-574c4bb98d-vxhgv" Feb 13 07:16:10.177046 kubelet[2556]: I0213 07:16:10.176972 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgdn\" (UniqueName: \"kubernetes.io/projected/24c56616-2415-4d16-9d03-0d7c06962ec2-kube-api-access-lpgdn\") pod \"cilium-operator-574c4bb98d-vxhgv\" (UID: \"24c56616-2415-4d16-9d03-0d7c06962ec2\") " pod="kube-system/cilium-operator-574c4bb98d-vxhgv" Feb 13 07:16:10.340632 env[1478]: time="2024-02-13T07:16:10.340536732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mf6x,Uid:70dcfd5e-85fc-4afa-9db5-2bc0b2719268,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:10.341474 env[1478]: time="2024-02-13T07:16:10.341354745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bp5sw,Uid:f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:10.369528 env[1478]: time="2024-02-13T07:16:10.369323477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:10.369528 env[1478]: time="2024-02-13T07:16:10.369452351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:10.369979 env[1478]: time="2024-02-13T07:16:10.369532182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:10.370163 env[1478]: time="2024-02-13T07:16:10.370063945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4ce5f155f97e1fe568226e24439bf60d14ce5cdc1464ab40160502c30d1a4a3 pid=2735 runtime=io.containerd.runc.v2 Feb 13 07:16:10.371180 env[1478]: time="2024-02-13T07:16:10.370987264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:10.371180 env[1478]: time="2024-02-13T07:16:10.371108584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:10.371568 env[1478]: time="2024-02-13T07:16:10.371172255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:10.371802 env[1478]: time="2024-02-13T07:16:10.371650921Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21 pid=2739 runtime=io.containerd.runc.v2 Feb 13 07:16:10.383191 env[1478]: time="2024-02-13T07:16:10.383105725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-vxhgv,Uid:24c56616-2415-4d16-9d03-0d7c06962ec2,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:10.401115 systemd[1]: Started cri-containerd-2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21.scope. Feb 13 07:16:10.405115 systemd[1]: Started cri-containerd-c4ce5f155f97e1fe568226e24439bf60d14ce5cdc1464ab40160502c30d1a4a3.scope. Feb 13 07:16:10.415610 env[1478]: time="2024-02-13T07:16:10.415336354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:10.415610 env[1478]: time="2024-02-13T07:16:10.415487734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:10.415610 env[1478]: time="2024-02-13T07:16:10.415539369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:10.416126 env[1478]: time="2024-02-13T07:16:10.415941179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8 pid=2788 runtime=io.containerd.runc.v2 Feb 13 07:16:10.445673 systemd[1]: Started cri-containerd-4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8.scope. 
Feb 13 07:16:10.450641 env[1478]: time="2024-02-13T07:16:10.450559006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bp5sw,Uid:f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\"" Feb 13 07:16:10.451034 env[1478]: time="2024-02-13T07:16:10.450997896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mf6x,Uid:70dcfd5e-85fc-4afa-9db5-2bc0b2719268,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4ce5f155f97e1fe568226e24439bf60d14ce5cdc1464ab40160502c30d1a4a3\"" Feb 13 07:16:10.453212 env[1478]: time="2024-02-13T07:16:10.453150051Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 07:16:10.454776 env[1478]: time="2024-02-13T07:16:10.454743125Z" level=info msg="CreateContainer within sandbox \"c4ce5f155f97e1fe568226e24439bf60d14ce5cdc1464ab40160502c30d1a4a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 07:16:10.464141 env[1478]: time="2024-02-13T07:16:10.464074200Z" level=info msg="CreateContainer within sandbox \"c4ce5f155f97e1fe568226e24439bf60d14ce5cdc1464ab40160502c30d1a4a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6dae390f896b4a4fbd36a206898b00ae6bec24274f4c412b2fe0054dabbe1692\"" Feb 13 07:16:10.464514 env[1478]: time="2024-02-13T07:16:10.464489568Z" level=info msg="StartContainer for \"6dae390f896b4a4fbd36a206898b00ae6bec24274f4c412b2fe0054dabbe1692\"" Feb 13 07:16:10.475264 systemd[1]: Started cri-containerd-6dae390f896b4a4fbd36a206898b00ae6bec24274f4c412b2fe0054dabbe1692.scope. Feb 13 07:16:10.484327 env[1478]: time="2024-02-13T07:16:10.484300035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-vxhgv,Uid:24c56616-2415-4d16-9d03-0d7c06962ec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\"" Feb 13 07:16:10.490447 env[1478]: time="2024-02-13T07:16:10.490421915Z" level=info msg="StartContainer for \"6dae390f896b4a4fbd36a206898b00ae6bec24274f4c412b2fe0054dabbe1692\" returns successfully" Feb 13 07:16:10.535766 kubelet[2556]: I0213 07:16:10.535746 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4mf6x" podStartSLOduration=0.535721946 podCreationTimestamp="2024-02-13 07:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:16:10.535563964 +0000 UTC m=+15.099089680" watchObservedRunningTime="2024-02-13 07:16:10.535721946 +0000 UTC m=+15.099247662" Feb 13 07:16:14.127213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926936938.mount: Deactivated successfully. 
Feb 13 07:16:15.835623 env[1478]: time="2024-02-13T07:16:15.835540667Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:15.836728 env[1478]: time="2024-02-13T07:16:15.836659406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:15.838487 env[1478]: time="2024-02-13T07:16:15.838431754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:15.839929 env[1478]: time="2024-02-13T07:16:15.839868759Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 07:16:15.840370 env[1478]: time="2024-02-13T07:16:15.840323184Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 07:16:15.841449 env[1478]: time="2024-02-13T07:16:15.841408611Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:16:15.847309 env[1478]: time="2024-02-13T07:16:15.847286089Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\"" Feb 13 07:16:15.847570 env[1478]: time="2024-02-13T07:16:15.847522824Z" level=info msg="StartContainer for \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\"" Feb 13 07:16:15.858107 systemd[1]: Started cri-containerd-e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4.scope. Feb 13 07:16:15.871193 env[1478]: time="2024-02-13T07:16:15.871163640Z" level=info msg="StartContainer for \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\" returns successfully" Feb 13 07:16:15.877287 systemd[1]: cri-containerd-e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4.scope: Deactivated successfully. Feb 13 07:16:16.849475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4-rootfs.mount: Deactivated successfully. 
Feb 13 07:16:17.019336 env[1478]: time="2024-02-13T07:16:17.019236197Z" level=info msg="shim disconnected" id=e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4 Feb 13 07:16:17.020118 env[1478]: time="2024-02-13T07:16:17.019338305Z" level=warning msg="cleaning up after shim disconnected" id=e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4 namespace=k8s.io Feb 13 07:16:17.020118 env[1478]: time="2024-02-13T07:16:17.019367539Z" level=info msg="cleaning up dead shim" Feb 13 07:16:17.034877 env[1478]: time="2024-02-13T07:16:17.034760746Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3058 runtime=io.containerd.runc.v2\n" Feb 13 07:16:17.554162 env[1478]: time="2024-02-13T07:16:17.554041626Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 07:16:17.564195 env[1478]: time="2024-02-13T07:16:17.564171182Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\"" Feb 13 07:16:17.564505 env[1478]: time="2024-02-13T07:16:17.564489107Z" level=info msg="StartContainer for \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\"" Feb 13 07:16:17.564543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718671639.mount: Deactivated successfully. Feb 13 07:16:17.571948 systemd[1]: Started cri-containerd-0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a.scope. Feb 13 07:16:17.584276 env[1478]: time="2024-02-13T07:16:17.584248691Z" level=info msg="StartContainer for \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\" returns successfully" Feb 13 07:16:17.590534 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:16:17.590666 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:16:17.590753 systemd[1]: Stopping systemd-sysctl.service... Feb 13 07:16:17.591574 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:16:17.592248 systemd[1]: cri-containerd-0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a.scope: Deactivated successfully. Feb 13 07:16:17.595579 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:16:17.602363 env[1478]: time="2024-02-13T07:16:17.602338123Z" level=info msg="shim disconnected" id=0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a Feb 13 07:16:17.602471 env[1478]: time="2024-02-13T07:16:17.602365798Z" level=warning msg="cleaning up after shim disconnected" id=0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a namespace=k8s.io Feb 13 07:16:17.602471 env[1478]: time="2024-02-13T07:16:17.602375322Z" level=info msg="cleaning up dead shim" Feb 13 07:16:17.605768 env[1478]: time="2024-02-13T07:16:17.605750343Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3122 runtime=io.containerd.runc.v2\n" Feb 13 07:16:17.846417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a-rootfs.mount: Deactivated successfully. 
Feb 13 07:16:18.099000 env[1478]: time="2024-02-13T07:16:18.098928313Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:18.099620 env[1478]: time="2024-02-13T07:16:18.099579879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:18.100212 env[1478]: time="2024-02-13T07:16:18.100172323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:16:18.100863 env[1478]: time="2024-02-13T07:16:18.100820044Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 07:16:18.101835 env[1478]: time="2024-02-13T07:16:18.101801919Z" level=info msg="CreateContainer within sandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 07:16:18.106694 env[1478]: time="2024-02-13T07:16:18.106676987Z" level=info msg="CreateContainer within sandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\"" Feb 13 07:16:18.107083 env[1478]: time="2024-02-13T07:16:18.107038836Z" level=info msg="StartContainer for \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\"" Feb 13 07:16:18.115809 systemd[1]: Started cri-containerd-d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372.scope. 
Feb 13 07:16:18.128809 env[1478]: time="2024-02-13T07:16:18.128753587Z" level=info msg="StartContainer for \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" returns successfully" Feb 13 07:16:18.553056 env[1478]: time="2024-02-13T07:16:18.553025031Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 07:16:18.558998 env[1478]: time="2024-02-13T07:16:18.558971539Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\"" Feb 13 07:16:18.559304 env[1478]: time="2024-02-13T07:16:18.559287067Z" level=info msg="StartContainer for \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\"" Feb 13 07:16:18.561715 kubelet[2556]: I0213 07:16:18.561692 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-vxhgv" podStartSLOduration=0.9455838 podCreationTimestamp="2024-02-13 07:16:10 +0000 UTC" firstStartedPulling="2024-02-13 07:16:10.484881183 +0000 UTC m=+15.048406899" lastFinishedPulling="2024-02-13 07:16:18.100961052 +0000 UTC m=+22.664486767" observedRunningTime="2024-02-13 07:16:18.560651835 +0000 UTC m=+23.124177559" watchObservedRunningTime="2024-02-13 07:16:18.561663668 +0000 UTC m=+23.125189383" Feb 13 07:16:18.571103 systemd[1]: Started cri-containerd-3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b.scope. Feb 13 07:16:18.588343 env[1478]: time="2024-02-13T07:16:18.588313059Z" level=info msg="StartContainer for \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\" returns successfully" Feb 13 07:16:18.590232 systemd[1]: cri-containerd-3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b.scope: Deactivated successfully. 
Feb 13 07:16:18.755017 env[1478]: time="2024-02-13T07:16:18.754951165Z" level=info msg="shim disconnected" id=3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b Feb 13 07:16:18.755017 env[1478]: time="2024-02-13T07:16:18.754997776Z" level=warning msg="cleaning up after shim disconnected" id=3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b namespace=k8s.io Feb 13 07:16:18.755017 env[1478]: time="2024-02-13T07:16:18.755015913Z" level=info msg="cleaning up dead shim" Feb 13 07:16:18.762372 env[1478]: time="2024-02-13T07:16:18.762331240Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3226 runtime=io.containerd.runc.v2\n" Feb 13 07:16:19.564750 env[1478]: time="2024-02-13T07:16:19.564652656Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 07:16:19.584484 env[1478]: time="2024-02-13T07:16:19.584335339Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\"" Feb 13 07:16:19.585535 env[1478]: time="2024-02-13T07:16:19.585439253Z" level=info msg="StartContainer for \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\"" Feb 13 07:16:19.606064 systemd[1]: Started cri-containerd-5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba.scope. Feb 13 07:16:19.621080 env[1478]: time="2024-02-13T07:16:19.621020420Z" level=info msg="StartContainer for \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\" returns successfully" Feb 13 07:16:19.621602 systemd[1]: cri-containerd-5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba.scope: Deactivated successfully. Feb 13 07:16:19.632957 env[1478]: time="2024-02-13T07:16:19.632921119Z" level=info msg="shim disconnected" id=5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba Feb 13 07:16:19.633077 env[1478]: time="2024-02-13T07:16:19.632957074Z" level=warning msg="cleaning up after shim disconnected" id=5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba namespace=k8s.io Feb 13 07:16:19.633077 env[1478]: time="2024-02-13T07:16:19.632965884Z" level=info msg="cleaning up dead shim" Feb 13 07:16:19.637348 env[1478]: time="2024-02-13T07:16:19.637300611Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:16:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3280 runtime=io.containerd.runc.v2\n" Feb 13 07:16:19.846630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba-rootfs.mount: Deactivated successfully. 
Feb 13 07:16:20.574761 env[1478]: time="2024-02-13T07:16:20.574610916Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 07:16:20.594278 env[1478]: time="2024-02-13T07:16:20.594153721Z" level=info msg="CreateContainer within sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\"" Feb 13 07:16:20.595191 env[1478]: time="2024-02-13T07:16:20.595073411Z" level=info msg="StartContainer for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\"" Feb 13 07:16:20.625377 systemd[1]: Started cri-containerd-1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5.scope. Feb 13 07:16:20.649256 env[1478]: time="2024-02-13T07:16:20.649200494Z" level=info msg="StartContainer for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" returns successfully" Feb 13 07:16:20.728443 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 07:16:20.794905 kubelet[2556]: I0213 07:16:20.794890 2556 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 07:16:20.805794 kubelet[2556]: I0213 07:16:20.805772 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:16:20.806635 kubelet[2556]: I0213 07:16:20.806624 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:16:20.809427 systemd[1]: Created slice kubepods-burstable-pod76817695_a0b9_4c74_a36d_bbe6bde5584d.slice. Feb 13 07:16:20.812123 systemd[1]: Created slice kubepods-burstable-podb1b9e49d_a55c_416e_a34a_39ebd08d9c51.slice. 
Feb 13 07:16:20.852096 kubelet[2556]: I0213 07:16:20.852045 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76817695-a0b9-4c74-a36d-bbe6bde5584d-config-volume\") pod \"coredns-5d78c9869d-mvdhc\" (UID: \"76817695-a0b9-4c74-a36d-bbe6bde5584d\") " pod="kube-system/coredns-5d78c9869d-mvdhc" Feb 13 07:16:20.852096 kubelet[2556]: I0213 07:16:20.852071 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkpp6\" (UniqueName: \"kubernetes.io/projected/b1b9e49d-a55c-416e-a34a-39ebd08d9c51-kube-api-access-gkpp6\") pod \"coredns-5d78c9869d-zngxn\" (UID: \"b1b9e49d-a55c-416e-a34a-39ebd08d9c51\") " pod="kube-system/coredns-5d78c9869d-zngxn" Feb 13 07:16:20.852096 kubelet[2556]: I0213 07:16:20.852084 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1b9e49d-a55c-416e-a34a-39ebd08d9c51-config-volume\") pod \"coredns-5d78c9869d-zngxn\" (UID: \"b1b9e49d-a55c-416e-a34a-39ebd08d9c51\") " pod="kube-system/coredns-5d78c9869d-zngxn" Feb 13 07:16:20.852216 kubelet[2556]: I0213 07:16:20.852149 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khn2s\" (UniqueName: \"kubernetes.io/projected/76817695-a0b9-4c74-a36d-bbe6bde5584d-kube-api-access-khn2s\") pod \"coredns-5d78c9869d-mvdhc\" (UID: \"76817695-a0b9-4c74-a36d-bbe6bde5584d\") " pod="kube-system/coredns-5d78c9869d-mvdhc" Feb 13 07:16:20.865400 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 07:16:21.112904 env[1478]: time="2024-02-13T07:16:21.112659737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-mvdhc,Uid:76817695-a0b9-4c74-a36d-bbe6bde5584d,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:21.114855 env[1478]: time="2024-02-13T07:16:21.114725584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-zngxn,Uid:b1b9e49d-a55c-416e-a34a-39ebd08d9c51,Namespace:kube-system,Attempt:0,}" Feb 13 07:16:21.597151 kubelet[2556]: I0213 07:16:21.597122 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bp5sw" podStartSLOduration=6.208855927 podCreationTimestamp="2024-02-13 07:16:10 +0000 UTC" firstStartedPulling="2024-02-13 07:16:10.451913241 +0000 UTC m=+15.015438991" lastFinishedPulling="2024-02-13 07:16:15.840127079 +0000 UTC m=+20.403652812" observedRunningTime="2024-02-13 07:16:21.596545892 +0000 UTC m=+26.160071608" watchObservedRunningTime="2024-02-13 07:16:21.597069748 +0000 UTC m=+26.160595460" Feb 13 07:16:22.457203 systemd-networkd[1334]: cilium_host: Link UP Feb 13 07:16:22.457288 systemd-networkd[1334]: cilium_net: Link UP Feb 13 07:16:22.464457 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 13 07:16:22.464556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 13 07:16:22.472062 systemd-networkd[1334]: cilium_net: Gained carrier Feb 13 07:16:22.472200 systemd-networkd[1334]: cilium_host: Gained carrier Feb 13 07:16:22.516920 systemd-networkd[1334]: cilium_vxlan: Link UP Feb 13 07:16:22.516925 systemd-networkd[1334]: cilium_vxlan: Gained carrier Feb 13 07:16:22.562486 systemd-networkd[1334]: cilium_net: Gained IPv6LL Feb 13 07:16:22.653397 kernel: NET: Registered PF_ALG protocol family 
Feb 13 07:16:22.994539 systemd-networkd[1334]: cilium_host: Gained IPv6LL Feb 13 07:16:23.180783 systemd-networkd[1334]: lxc_health: Link UP Feb 13 07:16:23.202174 systemd-networkd[1334]: lxc_health: Gained carrier Feb 13 07:16:23.202393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 07:16:23.679493 kernel: eth0: renamed from tmp5159d Feb 13 07:16:23.696638 systemd-networkd[1334]: lxc40f8b15791dc: Link UP Feb 13 07:16:23.711597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:16:23.711700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc40f8b15791dc: link becomes ready Feb 13 07:16:23.711966 systemd-networkd[1334]: lxc40f8b15791dc: Gained carrier Feb 13 07:16:23.730987 systemd-networkd[1334]: lxc2d29433eb4f9: Link UP Feb 13 07:16:23.735464 kernel: eth0: renamed from tmp53c05 Feb 13 07:16:23.749407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2d29433eb4f9: link becomes ready Feb 13 07:16:23.749429 systemd-networkd[1334]: tmp53c05: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:16:23.749480 systemd-networkd[1334]: tmp53c05: Cannot enable IPv6, ignoring: No such file or directory Feb 13 07:16:23.749500 systemd-networkd[1334]: tmp53c05: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Feb 13 07:16:23.749507 systemd-networkd[1334]: tmp53c05: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Feb 13 07:16:23.749513 systemd-networkd[1334]: tmp53c05: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Feb 13 07:16:23.749522 systemd-networkd[1334]: tmp53c05: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Feb 13 07:16:23.749711 systemd-networkd[1334]: lxc2d29433eb4f9: Gained carrier Feb 13 07:16:24.146494 systemd-networkd[1334]: cilium_vxlan: Gained IPv6LL Feb 13 07:16:24.274501 systemd-networkd[1334]: lxc_health: Gained IPv6LL Feb 13 07:16:24.914651 systemd-networkd[1334]: lxc2d29433eb4f9: Gained IPv6LL Feb 13 07:16:25.554505 systemd-networkd[1334]: lxc40f8b15791dc: Gained IPv6LL Feb 13 07:16:26.051403 env[1478]: time="2024-02-13T07:16:26.051357138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:26.051403 env[1478]: time="2024-02-13T07:16:26.051378727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:26.051403 env[1478]: time="2024-02-13T07:16:26.051391481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:26.051403 env[1478]: time="2024-02-13T07:16:26.051394167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:16:26.051663 env[1478]: time="2024-02-13T07:16:26.051412175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:16:26.051663 env[1478]: time="2024-02-13T07:16:26.051419175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:16:26.051663 env[1478]: time="2024-02-13T07:16:26.051473014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5159d2ea1b70cc30a5dba7daf87b21951d7e795d626aa923150b1ec9f851fc7c pid=3973 runtime=io.containerd.runc.v2 Feb 13 07:16:26.051663 env[1478]: time="2024-02-13T07:16:26.051459396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53c058e55bfcea2670d4b391187c3a1b7af412dd4aa6092fc5aeaf2fadd9fde6 pid=3972 runtime=io.containerd.runc.v2 Feb 13 07:16:26.059730 systemd[1]: Started cri-containerd-5159d2ea1b70cc30a5dba7daf87b21951d7e795d626aa923150b1ec9f851fc7c.scope. Feb 13 07:16:26.060388 systemd[1]: Started cri-containerd-53c058e55bfcea2670d4b391187c3a1b7af412dd4aa6092fc5aeaf2fadd9fde6.scope. Feb 13 07:16:26.082635 env[1478]: time="2024-02-13T07:16:26.082590068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-zngxn,Uid:b1b9e49d-a55c-416e-a34a-39ebd08d9c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c058e55bfcea2670d4b391187c3a1b7af412dd4aa6092fc5aeaf2fadd9fde6\"" Feb 13 07:16:26.082833 env[1478]: time="2024-02-13T07:16:26.082815442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-mvdhc,Uid:76817695-a0b9-4c74-a36d-bbe6bde5584d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5159d2ea1b70cc30a5dba7daf87b21951d7e795d626aa923150b1ec9f851fc7c\"" Feb 13 07:16:26.084722 env[1478]: time="2024-02-13T07:16:26.084706393Z" level=info msg="CreateContainer within sandbox \"53c058e55bfcea2670d4b391187c3a1b7af412dd4aa6092fc5aeaf2fadd9fde6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 07:16:26.084722 env[1478]: time="2024-02-13T07:16:26.084706451Z" level=info msg="CreateContainer within sandbox \"5159d2ea1b70cc30a5dba7daf87b21951d7e795d626aa923150b1ec9f851fc7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 07:16:26.089659 env[1478]: time="2024-02-13T07:16:26.089639512Z" level=info msg="CreateContainer within sandbox \"53c058e55bfcea2670d4b391187c3a1b7af412dd4aa6092fc5aeaf2fadd9fde6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c8f70d94bb32a4840d97fa9febdceb4de44522f809f47fe3ac17f68e03790a7\"" Feb 13 07:16:26.089907 env[1478]: time="2024-02-13T07:16:26.089861037Z" level=info msg="StartContainer for \"3c8f70d94bb32a4840d97fa9febdceb4de44522f809f47fe3ac17f68e03790a7\"" Feb 13 07:16:26.090598 env[1478]: time="2024-02-13T07:16:26.090581071Z" level=info msg="CreateContainer within sandbox \"5159d2ea1b70cc30a5dba7daf87b21951d7e795d626aa923150b1ec9f851fc7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"719bf99ed83303ef990ea6e995c1607e97efa575988f193116283d4b0775a139\"" Feb 13 07:16:26.090798 env[1478]: time="2024-02-13T07:16:26.090784571Z" level=info msg="StartContainer for \"719bf99ed83303ef990ea6e995c1607e97efa575988f193116283d4b0775a139\"" Feb 13 07:16:26.110020 systemd[1]: Started cri-containerd-3c8f70d94bb32a4840d97fa9febdceb4de44522f809f47fe3ac17f68e03790a7.scope. Feb 13 07:16:26.111435 systemd[1]: Started cri-containerd-719bf99ed83303ef990ea6e995c1607e97efa575988f193116283d4b0775a139.scope. 
Feb 13 07:16:26.124497 env[1478]: time="2024-02-13T07:16:26.124472695Z" level=info msg="StartContainer for \"3c8f70d94bb32a4840d97fa9febdceb4de44522f809f47fe3ac17f68e03790a7\" returns successfully" Feb 13 07:16:26.124625 env[1478]: time="2024-02-13T07:16:26.124582423Z" level=info msg="StartContainer for \"719bf99ed83303ef990ea6e995c1607e97efa575988f193116283d4b0775a139\" returns successfully" Feb 13 07:16:26.610943 kubelet[2556]: I0213 07:16:26.610889 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-zngxn" podStartSLOduration=16.610808871 podCreationTimestamp="2024-02-13 07:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:16:26.609654928 +0000 UTC m=+31.173180754" watchObservedRunningTime="2024-02-13 07:16:26.610808871 +0000 UTC m=+31.174334648" Feb 13 07:16:26.649243 kubelet[2556]: I0213 07:16:26.649175 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-mvdhc" podStartSLOduration=16.649047616 podCreationTimestamp="2024-02-13 07:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:16:26.629616524 +0000 UTC m=+31.193142352" watchObservedRunningTime="2024-02-13 07:16:26.649047616 +0000 UTC m=+31.212573387" Feb 13 07:21:02.001495 systemd[1]: Started sshd@5-145.40.90.207:22-218.92.0.59:31390.service. Feb 13 07:21:02.151157 sshd[4188]: Unable to negotiate with 218.92.0.59 port 31390: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 13 07:21:02.153053 systemd[1]: sshd@5-145.40.90.207:22-218.92.0.59:31390.service: Deactivated successfully. Feb 13 07:25:57.937125 update_engine[1468]: I0213 07:25:57.937011 1468 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 07:25:57.937125 update_engine[1468]: I0213 07:25:57.937093 1468 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 07:25:57.938809 update_engine[1468]: I0213 07:25:57.938716 1468 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 07:25:57.939695 update_engine[1468]: I0213 07:25:57.939601 1468 omaha_request_params.cc:62] Current group set to lts Feb 13 07:25:57.939930 update_engine[1468]: I0213 07:25:57.939898 1468 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 07:25:57.939930 update_engine[1468]: I0213 07:25:57.939919 1468 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 07:25:57.940266 update_engine[1468]: I0213 07:25:57.939952 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 07:25:57.940266 update_engine[1468]: I0213 07:25:57.940018 1468 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 07:25:57.940266 update_engine[1468]: I0213 07:25:57.940160 1468 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 07:25:57.940266 update_engine[1468]: I0213 07:25:57.940177 1468 omaha_request_action.cc:271] Request: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: Feb 13 07:25:57.940266 update_engine[1468]: I0213 07:25:57.940187 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 07:25:57.941769 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 07:25:57.943414 update_engine[1468]: I0213 07:25:57.943312 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 07:25:57.943595 update_engine[1468]: E0213 07:25:57.943557 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 07:25:57.943779 update_engine[1468]: I0213 07:25:57.943730 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 07:26:07.930895 update_engine[1468]: I0213 07:26:07.930776 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 07:26:07.932744 update_engine[1468]: I0213 07:26:07.931252 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 07:26:07.932744 update_engine[1468]: E0213 07:26:07.931483 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 07:26:07.932744 update_engine[1468]: I0213 07:26:07.931661 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 07:26:17.940012 update_engine[1468]: I0213 07:26:17.939894 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 07:26:17.940843 update_engine[1468]: I0213 07:26:17.940371 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 07:26:17.940843 update_engine[1468]: E0213 07:26:17.940606 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 07:26:17.940843 update_engine[1468]: I0213 07:26:17.940783 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 07:26:27.931719 update_engine[1468]: I0213 07:26:27.931600 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 07:26:27.932595 update_engine[1468]: I0213 07:26:27.932080 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 07:26:27.932595 update_engine[1468]: E0213 07:26:27.932286 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 07:26:27.932595 update_engine[1468]: I0213 07:26:27.932480 1468 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 07:26:27.932595 update_engine[1468]: I0213 07:26:27.932500 1468 omaha_request_action.cc:621] Omaha request response: Feb 13 07:26:27.933019 update_engine[1468]: E0213 07:26:27.932647 1468 omaha_request_action.cc:640] Omaha request network transfer failed. 
Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932675 1468 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932684 1468 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932693 1468 update_attempter.cc:306] Processing Done. Feb 13 07:26:27.933019 update_engine[1468]: E0213 07:26:27.932718 1468 update_attempter.cc:619] Update failed. Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932727 1468 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932735 1468 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932745 1468 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932891 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932941 1468 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932951 1468 omaha_request_action.cc:271] Request: Feb 13 07:26:27.933019 update_engine[1468]: Feb 13 07:26:27.933019 update_engine[1468]: Feb 13 07:26:27.933019 update_engine[1468]: Feb 13 07:26:27.933019 update_engine[1468]: Feb 13 07:26:27.933019 update_engine[1468]: Feb 13 07:26:27.933019 update_engine[1468]: Feb 13 07:26:27.933019 update_engine[1468]: I0213 07:26:27.932960 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933243 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 07:26:27.934706 update_engine[1468]: E0213 07:26:27.933432 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933570 1468 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933584 1468 omaha_request_action.cc:621] Omaha request response: Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933593 1468 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933601 1468 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933609 1468 update_attempter.cc:306] Processing Done. Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933616 1468 update_attempter.cc:310] Error event sent. Feb 13 07:26:27.934706 update_engine[1468]: I0213 07:26:27.933636 1468 update_check_scheduler.cc:74] Next update check in 43m34s Feb 13 07:26:27.935499 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 07:26:27.935499 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 07:29:36.377737 systemd[1]: Started sshd@6-145.40.90.207:22-141.98.11.11:29558.service. 
Feb 13 07:29:39.497995 sshd[4248]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.11 user=operator Feb 13 07:29:41.213706 sshd[4248]: Failed password for operator from 141.98.11.11 port 29558 ssh2 Feb 13 07:29:42.403441 sshd[4248]: Connection closed by authenticating user operator 141.98.11.11 port 29558 [preauth] Feb 13 07:29:42.405972 systemd[1]: sshd@6-145.40.90.207:22-141.98.11.11:29558.service: Deactivated successfully. Feb 13 07:29:55.578790 systemd[1]: Starting systemd-tmpfiles-clean.service... Feb 13 07:29:55.584587 systemd-tmpfiles[4255]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 07:29:55.584807 systemd-tmpfiles[4255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 07:29:55.585486 systemd-tmpfiles[4255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 07:29:55.595710 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 13 07:29:55.595802 systemd[1]: Finished systemd-tmpfiles-clean.service. Feb 13 07:29:55.596839 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 13 07:33:39.459950 systemd[1]: Started sshd@7-145.40.90.207:22-218.92.0.45:14424.service. Feb 13 07:33:39.630788 sshd[4283]: Unable to negotiate with 218.92.0.45 port 14424: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 13 07:33:39.632691 systemd[1]: sshd@7-145.40.90.207:22-218.92.0.45:14424.service: Deactivated successfully. Feb 13 07:36:41.011738 systemd[1]: Started sshd@8-145.40.90.207:22-139.178.68.195:46102.service. Feb 13 07:36:41.047215 sshd[4311]: Accepted publickey for core from 139.178.68.195 port 46102 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:41.050530 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:41.061781 systemd-logind[1466]: New session 8 of user core. Feb 13 07:36:41.064240 systemd[1]: Started session-8.scope. Feb 13 07:36:41.209291 sshd[4311]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:41.210803 systemd[1]: sshd@8-145.40.90.207:22-139.178.68.195:46102.service: Deactivated successfully. Feb 13 07:36:41.211262 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 07:36:41.211667 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Feb 13 07:36:41.212134 systemd-logind[1466]: Removed session 8. Feb 13 07:36:46.218612 systemd[1]: Started sshd@9-145.40.90.207:22-139.178.68.195:59634.service. Feb 13 07:36:46.253221 sshd[4341]: Accepted publickey for core from 139.178.68.195 port 59634 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:46.256438 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:46.266754 systemd-logind[1466]: New session 9 of user core. Feb 13 07:36:46.269289 systemd[1]: Started session-9.scope. Feb 13 07:36:46.375597 sshd[4341]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:46.377131 systemd[1]: sshd@9-145.40.90.207:22-139.178.68.195:59634.service: Deactivated successfully. Feb 13 07:36:46.377587 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 07:36:46.378019 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. 
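Two distinct SSH probes appear above: a password brute-force attempt against the operator account from 141.98.11.11, and a client at 218.92.0.45 whose connection is rejected at the preauth stage because it offers only SHA-1 Diffie-Hellman key-exchange methods that current OpenSSH builds disable by default. To see what the local build and the running sshd actually accept, and how a single legacy method could be re-enabled if an old client truly requires it (a deliberate weakening, shown only as a sketch against a stock sshd_config):

    # Key-exchange algorithms compiled into this OpenSSH build:
    ssh -Q kex

    # Algorithms the running sshd is configured to offer:
    sshd -T | grep -i kexalgorithms

    # /etc/ssh/sshd_config -- opt back in to one legacy method; prefer
    # upgrading the client instead of doing this:
    KexAlgorithms +diffie-hellman-group14-sha1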
Feb 13 07:36:46.378615 systemd-logind[1466]: Removed session 9. Feb 13 07:36:51.385434 systemd[1]: Started sshd@10-145.40.90.207:22-139.178.68.195:59640.service. Feb 13 07:36:51.418981 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 59640 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:51.420013 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:51.423301 systemd-logind[1466]: New session 10 of user core. Feb 13 07:36:51.424172 systemd[1]: Started session-10.scope. Feb 13 07:36:51.514585 sshd[4367]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:51.516045 systemd[1]: sshd@10-145.40.90.207:22-139.178.68.195:59640.service: Deactivated successfully. Feb 13 07:36:51.516482 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 07:36:51.516902 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Feb 13 07:36:51.517361 systemd-logind[1466]: Removed session 10. Feb 13 07:36:56.524642 systemd[1]: Started sshd@11-145.40.90.207:22-139.178.68.195:37650.service. Feb 13 07:36:56.558360 sshd[4396]: Accepted publickey for core from 139.178.68.195 port 37650 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:56.559424 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:56.563146 systemd-logind[1466]: New session 11 of user core. Feb 13 07:36:56.564019 systemd[1]: Started session-11.scope. Feb 13 07:36:56.654777 sshd[4396]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:56.656508 systemd[1]: sshd@11-145.40.90.207:22-139.178.68.195:37650.service: Deactivated successfully. Feb 13 07:36:56.656843 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 07:36:56.657188 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Feb 13 07:36:56.657796 systemd[1]: Started sshd@12-145.40.90.207:22-139.178.68.195:37666.service. Feb 13 07:36:56.658231 systemd-logind[1466]: Removed session 11. Feb 13 07:36:56.691364 sshd[4422]: Accepted publickey for core from 139.178.68.195 port 37666 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:56.692146 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:56.694973 systemd-logind[1466]: New session 12 of user core. Feb 13 07:36:56.695520 systemd[1]: Started session-12.scope. Feb 13 07:36:57.104198 sshd[4422]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:57.106024 systemd[1]: sshd@12-145.40.90.207:22-139.178.68.195:37666.service: Deactivated successfully. Feb 13 07:36:57.106411 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 07:36:57.106798 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Feb 13 07:36:57.107419 systemd[1]: Started sshd@13-145.40.90.207:22-139.178.68.195:37668.service. Feb 13 07:36:57.107958 systemd-logind[1466]: Removed session 12. Feb 13 07:36:57.141777 sshd[4447]: Accepted publickey for core from 139.178.68.195 port 37668 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:57.145188 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:57.156079 systemd-logind[1466]: New session 13 of user core. Feb 13 07:36:57.158644 systemd[1]: Started session-13.scope. 
Feb 13 07:36:57.305567 sshd[4447]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:57.307081 systemd[1]: sshd@13-145.40.90.207:22-139.178.68.195:37668.service: Deactivated successfully. Feb 13 07:36:57.307527 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 07:36:57.307889 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Feb 13 07:36:57.308304 systemd-logind[1466]: Removed session 13. Feb 13 07:37:02.314958 systemd[1]: Started sshd@14-145.40.90.207:22-139.178.68.195:37672.service. Feb 13 07:37:02.349342 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 37672 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:02.352589 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:02.363165 systemd-logind[1466]: New session 14 of user core. Feb 13 07:37:02.365658 systemd[1]: Started session-14.scope. Feb 13 07:37:02.466598 sshd[4474]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:02.468080 systemd[1]: sshd@14-145.40.90.207:22-139.178.68.195:37672.service: Deactivated successfully. Feb 13 07:37:02.468502 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 07:37:02.468852 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Feb 13 07:37:02.469293 systemd-logind[1466]: Removed session 14. Feb 13 07:37:07.475220 systemd[1]: Started sshd@15-145.40.90.207:22-139.178.68.195:34786.service. Feb 13 07:37:07.508772 sshd[4498]: Accepted publickey for core from 139.178.68.195 port 34786 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:07.509840 sshd[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:07.512995 systemd-logind[1466]: New session 15 of user core. Feb 13 07:37:07.513747 systemd[1]: Started session-15.scope. Feb 13 07:37:07.601605 sshd[4498]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:07.603083 systemd[1]: sshd@15-145.40.90.207:22-139.178.68.195:34786.service: Deactivated successfully. Feb 13 07:37:07.603539 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 07:37:07.603954 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Feb 13 07:37:07.604407 systemd-logind[1466]: Removed session 15. Feb 13 07:37:12.611538 systemd[1]: Started sshd@16-145.40.90.207:22-139.178.68.195:34798.service. Feb 13 07:37:12.646186 sshd[4526]: Accepted publickey for core from 139.178.68.195 port 34798 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:12.649337 sshd[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:12.659887 systemd-logind[1466]: New session 16 of user core. Feb 13 07:37:12.662502 systemd[1]: Started session-16.scope. Feb 13 07:37:12.769163 sshd[4526]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:12.770957 systemd[1]: sshd@16-145.40.90.207:22-139.178.68.195:34798.service: Deactivated successfully. Feb 13 07:37:12.771316 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 07:37:12.771707 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Feb 13 07:37:12.772270 systemd[1]: Started sshd@17-145.40.90.207:22-139.178.68.195:34808.service. Feb 13 07:37:12.772714 systemd-logind[1466]: Removed session 16. 
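Each accepted publickey login in this stretch follows the same systemd-logind lifecycle: pam_unix opens the session, logind registers it and starts a session-N.scope unit, and when sshd closes the connection the scope deactivates and the session is removed. While a session is live, the bookkeeping can be inspected directly (session 14 below is taken from the log; substitute any live session ID):

    # Enumerate live logind sessions (ID, user, seat, TTY):
    loginctl list-sessions

    # Inspect one session and the scope unit backing it:
    loginctl session-status 14
    systemctl status session-14.scope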
Feb 13 07:37:12.806125 sshd[4551]: Accepted publickey for core from 139.178.68.195 port 34808 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:12.806900 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:12.809637 systemd-logind[1466]: New session 17 of user core. Feb 13 07:37:12.810269 systemd[1]: Started session-17.scope. Feb 13 07:37:13.784282 sshd[4551]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:13.791450 systemd[1]: sshd@17-145.40.90.207:22-139.178.68.195:34808.service: Deactivated successfully. Feb 13 07:37:13.792243 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 07:37:13.792728 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Feb 13 07:37:13.793237 systemd[1]: Started sshd@18-145.40.90.207:22-139.178.68.195:34820.service. Feb 13 07:37:13.793835 systemd-logind[1466]: Removed session 17. Feb 13 07:37:13.826665 sshd[4574]: Accepted publickey for core from 139.178.68.195 port 34820 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:13.827704 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:13.831178 systemd-logind[1466]: New session 18 of user core. Feb 13 07:37:13.832080 systemd[1]: Started session-18.scope. Feb 13 07:37:14.624890 sshd[4574]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:14.626686 systemd[1]: sshd@18-145.40.90.207:22-139.178.68.195:34820.service: Deactivated successfully. Feb 13 07:37:14.627010 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 07:37:14.627300 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Feb 13 07:37:14.628235 systemd[1]: Started sshd@19-145.40.90.207:22-139.178.68.195:34828.service. Feb 13 07:37:14.628635 systemd-logind[1466]: Removed session 18. Feb 13 07:37:14.662006 sshd[4604]: Accepted publickey for core from 139.178.68.195 port 34828 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:14.662889 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:14.665437 systemd-logind[1466]: New session 19 of user core. Feb 13 07:37:14.666041 systemd[1]: Started session-19.scope. Feb 13 07:37:14.906307 sshd[4604]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:14.907930 systemd[1]: sshd@19-145.40.90.207:22-139.178.68.195:34828.service: Deactivated successfully. Feb 13 07:37:14.908289 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 07:37:14.908675 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Feb 13 07:37:14.909262 systemd[1]: Started sshd@20-145.40.90.207:22-139.178.68.195:34838.service. Feb 13 07:37:14.909660 systemd-logind[1466]: Removed session 19. Feb 13 07:37:14.943298 sshd[4632]: Accepted publickey for core from 139.178.68.195 port 34838 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:14.944383 sshd[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:14.948179 systemd-logind[1466]: New session 20 of user core. Feb 13 07:37:14.949045 systemd[1]: Started session-20.scope. Feb 13 07:37:15.100038 sshd[4632]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:15.101875 systemd[1]: sshd@20-145.40.90.207:22-139.178.68.195:34838.service: Deactivated successfully. Feb 13 07:37:15.102439 systemd[1]: session-20.scope: Deactivated successfully. 
Feb 13 07:37:15.102970 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit. Feb 13 07:37:15.103580 systemd-logind[1466]: Removed session 20. Feb 13 07:37:20.103122 systemd[1]: Started sshd@21-145.40.90.207:22-139.178.68.195:45632.service. Feb 13 07:37:20.138145 sshd[4662]: Accepted publickey for core from 139.178.68.195 port 45632 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:20.139055 sshd[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:20.142241 systemd-logind[1466]: New session 21 of user core. Feb 13 07:37:20.143054 systemd[1]: Started session-21.scope. Feb 13 07:37:20.232462 sshd[4662]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:20.233857 systemd[1]: sshd@21-145.40.90.207:22-139.178.68.195:45632.service: Deactivated successfully. Feb 13 07:37:20.234301 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 07:37:20.234734 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit. Feb 13 07:37:20.235270 systemd-logind[1466]: Removed session 21. Feb 13 07:37:25.241934 systemd[1]: Started sshd@22-145.40.90.207:22-139.178.68.195:45640.service. Feb 13 07:37:25.275640 sshd[4687]: Accepted publickey for core from 139.178.68.195 port 45640 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:25.276635 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:25.280064 systemd-logind[1466]: New session 22 of user core. Feb 13 07:37:25.280846 systemd[1]: Started session-22.scope. Feb 13 07:37:25.366760 sshd[4687]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:25.368221 systemd[1]: sshd@22-145.40.90.207:22-139.178.68.195:45640.service: Deactivated successfully. Feb 13 07:37:25.368685 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 07:37:25.369108 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit. Feb 13 07:37:25.369697 systemd-logind[1466]: Removed session 22. Feb 13 07:37:30.376132 systemd[1]: Started sshd@23-145.40.90.207:22-139.178.68.195:60210.service. Feb 13 07:37:30.409671 sshd[4712]: Accepted publickey for core from 139.178.68.195 port 60210 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:30.410687 sshd[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:30.414398 systemd-logind[1466]: New session 23 of user core. Feb 13 07:37:30.415234 systemd[1]: Started session-23.scope. Feb 13 07:37:30.505120 sshd[4712]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:30.506680 systemd[1]: sshd@23-145.40.90.207:22-139.178.68.195:60210.service: Deactivated successfully. Feb 13 07:37:30.507126 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 07:37:30.507549 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit. Feb 13 07:37:30.508050 systemd-logind[1466]: Removed session 23. Feb 13 07:37:35.508083 systemd[1]: Started sshd@24-145.40.90.207:22-139.178.68.195:60212.service. Feb 13 07:37:35.543365 sshd[4735]: Accepted publickey for core from 139.178.68.195 port 60212 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:35.546570 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:35.556721 systemd-logind[1466]: New session 24 of user core. Feb 13 07:37:35.559317 systemd[1]: Started session-24.scope. 
Feb 13 07:37:35.661116 sshd[4735]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:35.662810 systemd[1]: sshd@24-145.40.90.207:22-139.178.68.195:60212.service: Deactivated successfully. Feb 13 07:37:35.663172 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 07:37:35.663667 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit. Feb 13 07:37:35.664269 systemd[1]: Started sshd@25-145.40.90.207:22-139.178.68.195:60216.service. Feb 13 07:37:35.664717 systemd-logind[1466]: Removed session 24. Feb 13 07:37:35.699095 sshd[4760]: Accepted publickey for core from 139.178.68.195 port 60216 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:35.702417 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:35.712745 systemd-logind[1466]: New session 25 of user core. Feb 13 07:37:35.715678 systemd[1]: Started session-25.scope. Feb 13 07:37:37.106714 env[1478]: time="2024-02-13T07:37:37.106606719Z" level=info msg="StopContainer for \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" with timeout 30 (s)" Feb 13 07:37:37.107322 env[1478]: time="2024-02-13T07:37:37.107093851Z" level=info msg="Stop container \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" with signal terminated" Feb 13 07:37:37.118545 systemd[1]: cri-containerd-d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372.scope: Deactivated successfully. Feb 13 07:37:37.118830 systemd[1]: cri-containerd-d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372.scope: Consumed 2.399s CPU time. Feb 13 07:37:37.131050 env[1478]: time="2024-02-13T07:37:37.130884892Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:37:37.136937 env[1478]: time="2024-02-13T07:37:37.136895369Z" level=info msg="StopContainer for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" with timeout 1 (s)" Feb 13 07:37:37.137174 env[1478]: time="2024-02-13T07:37:37.137140867Z" level=info msg="Stop container \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" with signal terminated" Feb 13 07:37:37.138113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372-rootfs.mount: Deactivated successfully. 
Feb 13 07:37:37.143093 systemd-networkd[1334]: lxc_health: Link DOWN Feb 13 07:37:37.143098 systemd-networkd[1334]: lxc_health: Lost carrier Feb 13 07:37:37.170128 env[1478]: time="2024-02-13T07:37:37.170069953Z" level=info msg="shim disconnected" id=d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372 Feb 13 07:37:37.170275 env[1478]: time="2024-02-13T07:37:37.170128767Z" level=warning msg="cleaning up after shim disconnected" id=d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372 namespace=k8s.io Feb 13 07:37:37.170275 env[1478]: time="2024-02-13T07:37:37.170146779Z" level=info msg="cleaning up dead shim" Feb 13 07:37:37.178922 env[1478]: time="2024-02-13T07:37:37.178845779Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4825 runtime=io.containerd.runc.v2\n" Feb 13 07:37:37.180660 env[1478]: time="2024-02-13T07:37:37.180571299Z" level=info msg="StopContainer for \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" returns successfully" Feb 13 07:37:37.181556 env[1478]: time="2024-02-13T07:37:37.181466796Z" level=info msg="StopPodSandbox for \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\"" Feb 13 07:37:37.181711 env[1478]: time="2024-02-13T07:37:37.181574799Z" level=info msg="Container to stop \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:37.185461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8-shm.mount: Deactivated successfully. Feb 13 07:37:37.195438 systemd[1]: cri-containerd-4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8.scope: Deactivated successfully. Feb 13 07:37:37.221784 systemd[1]: cri-containerd-1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5.scope: Deactivated successfully. Feb 13 07:37:37.222100 systemd[1]: cri-containerd-1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5.scope: Consumed 11.874s CPU time. Feb 13 07:37:37.224477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8-rootfs.mount: Deactivated successfully. 
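The containerd entries above trace the normal CRI shutdown path for the Cilium pods: StopContainer delivers SIGTERM with a grace timeout ("with timeout 30 (s)"), the runc shim disconnects once the process exits, the cri-containerd scope is released with its consumed CPU time accounted, and StopPodSandbox then tears down the sandbox (here taking the lxc_health link down with it). The same sequence can be driven by hand with crictl; the truncated IDs below are the ones from this log and stand in for whatever crictl reports:

    # Locate the container and its pod sandbox:
    crictl ps --name cilium-operator
    crictl pods

    # Stop the container with a 30-second grace period (SIGTERM, then
    # SIGKILL), mirroring the StopContainer call above:
    crictl stop --timeout 30 d868087014cd

    # Stop and remove the sandbox, which triggers the TearDown network step:
    crictl stopp 4dbe5eac3939
    crictl rmp 4dbe5eac3939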
Feb 13 07:37:37.224753 env[1478]: time="2024-02-13T07:37:37.224707734Z" level=info msg="shim disconnected" id=4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8 Feb 13 07:37:37.224854 env[1478]: time="2024-02-13T07:37:37.224758212Z" level=warning msg="cleaning up after shim disconnected" id=4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8 namespace=k8s.io Feb 13 07:37:37.224854 env[1478]: time="2024-02-13T07:37:37.224770053Z" level=info msg="cleaning up dead shim" Feb 13 07:37:37.230623 env[1478]: time="2024-02-13T07:37:37.230592205Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4862 runtime=io.containerd.runc.v2\n" Feb 13 07:37:37.230870 env[1478]: time="2024-02-13T07:37:37.230825115Z" level=info msg="TearDown network for sandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" successfully" Feb 13 07:37:37.230870 env[1478]: time="2024-02-13T07:37:37.230844410Z" level=info msg="StopPodSandbox for \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" returns successfully" Feb 13 07:37:37.235750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5-rootfs.mount: Deactivated successfully. Feb 13 07:37:37.246821 env[1478]: time="2024-02-13T07:37:37.246756366Z" level=info msg="shim disconnected" id=1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5 Feb 13 07:37:37.246821 env[1478]: time="2024-02-13T07:37:37.246790395Z" level=warning msg="cleaning up after shim disconnected" id=1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5 namespace=k8s.io Feb 13 07:37:37.246821 env[1478]: time="2024-02-13T07:37:37.246801102Z" level=info msg="cleaning up dead shim" Feb 13 07:37:37.251513 env[1478]: time="2024-02-13T07:37:37.251460505Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4884 runtime=io.containerd.runc.v2\n" Feb 13 07:37:37.252504 env[1478]: time="2024-02-13T07:37:37.252451719Z" level=info msg="StopContainer for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" returns successfully" Feb 13 07:37:37.252824 env[1478]: time="2024-02-13T07:37:37.252775206Z" level=info msg="StopPodSandbox for \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\"" Feb 13 07:37:37.252875 env[1478]: time="2024-02-13T07:37:37.252821979Z" level=info msg="Container to stop \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:37.252875 env[1478]: time="2024-02-13T07:37:37.252841272Z" level=info msg="Container to stop \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:37.252875 env[1478]: time="2024-02-13T07:37:37.252849685Z" level=info msg="Container to stop \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:37.252875 env[1478]: time="2024-02-13T07:37:37.252858014Z" level=info msg="Container to stop \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:37.252875 env[1478]: time="2024-02-13T07:37:37.252865863Z" level=info msg="Container to stop 
\"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:37.256760 systemd[1]: cri-containerd-2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21.scope: Deactivated successfully. Feb 13 07:37:37.290112 env[1478]: time="2024-02-13T07:37:37.290014433Z" level=info msg="shim disconnected" id=2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21 Feb 13 07:37:37.290112 env[1478]: time="2024-02-13T07:37:37.290089146Z" level=warning msg="cleaning up after shim disconnected" id=2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21 namespace=k8s.io Feb 13 07:37:37.290112 env[1478]: time="2024-02-13T07:37:37.290106436Z" level=info msg="cleaning up dead shim" Feb 13 07:37:37.298805 env[1478]: time="2024-02-13T07:37:37.298727819Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4915 runtime=io.containerd.runc.v2\n" Feb 13 07:37:37.299145 env[1478]: time="2024-02-13T07:37:37.299103151Z" level=info msg="TearDown network for sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" successfully" Feb 13 07:37:37.299145 env[1478]: time="2024-02-13T07:37:37.299140531Z" level=info msg="StopPodSandbox for \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" returns successfully" Feb 13 07:37:37.329538 kubelet[2556]: I0213 07:37:37.329449 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24c56616-2415-4d16-9d03-0d7c06962ec2-cilium-config-path\") pod \"24c56616-2415-4d16-9d03-0d7c06962ec2\" (UID: \"24c56616-2415-4d16-9d03-0d7c06962ec2\") " Feb 13 07:37:37.330250 kubelet[2556]: I0213 07:37:37.329565 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpgdn\" (UniqueName: \"kubernetes.io/projected/24c56616-2415-4d16-9d03-0d7c06962ec2-kube-api-access-lpgdn\") pod \"24c56616-2415-4d16-9d03-0d7c06962ec2\" (UID: \"24c56616-2415-4d16-9d03-0d7c06962ec2\") " Feb 13 07:37:37.330250 kubelet[2556]: W0213 07:37:37.329835 2556 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/24c56616-2415-4d16-9d03-0d7c06962ec2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:37:37.334289 kubelet[2556]: I0213 07:37:37.334197 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24c56616-2415-4d16-9d03-0d7c06962ec2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "24c56616-2415-4d16-9d03-0d7c06962ec2" (UID: "24c56616-2415-4d16-9d03-0d7c06962ec2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:37:37.335504 kubelet[2556]: I0213 07:37:37.335367 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24c56616-2415-4d16-9d03-0d7c06962ec2-kube-api-access-lpgdn" (OuterVolumeSpecName: "kube-api-access-lpgdn") pod "24c56616-2415-4d16-9d03-0d7c06962ec2" (UID: "24c56616-2415-4d16-9d03-0d7c06962ec2"). InnerVolumeSpecName "kube-api-access-lpgdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:37.364987 kubelet[2556]: I0213 07:37:37.364797 2556 scope.go:115] "RemoveContainer" containerID="1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5" Feb 13 07:37:37.367409 env[1478]: time="2024-02-13T07:37:37.367323565Z" level=info msg="RemoveContainer for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\"" Feb 13 07:37:37.371961 env[1478]: time="2024-02-13T07:37:37.371898086Z" level=info msg="RemoveContainer for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" returns successfully" Feb 13 07:37:37.372319 kubelet[2556]: I0213 07:37:37.372281 2556 scope.go:115] "RemoveContainer" containerID="5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba" Feb 13 07:37:37.374750 env[1478]: time="2024-02-13T07:37:37.374644261Z" level=info msg="RemoveContainer for \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\"" Feb 13 07:37:37.377250 systemd[1]: Removed slice kubepods-besteffort-pod24c56616_2415_4d16_9d03_0d7c06962ec2.slice. Feb 13 07:37:37.377647 systemd[1]: kubepods-besteffort-pod24c56616_2415_4d16_9d03_0d7c06962ec2.slice: Consumed 2.433s CPU time. Feb 13 07:37:37.380965 env[1478]: time="2024-02-13T07:37:37.380887846Z" level=info msg="RemoveContainer for \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\" returns successfully" Feb 13 07:37:37.381347 kubelet[2556]: I0213 07:37:37.381294 2556 scope.go:115] "RemoveContainer" containerID="3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b" Feb 13 07:37:37.383802 env[1478]: time="2024-02-13T07:37:37.383706908Z" level=info msg="RemoveContainer for \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\"" Feb 13 07:37:37.387260 env[1478]: time="2024-02-13T07:37:37.387201087Z" level=info msg="RemoveContainer for \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\" returns successfully" Feb 13 07:37:37.387786 kubelet[2556]: I0213 07:37:37.387574 2556 scope.go:115] "RemoveContainer" containerID="0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a" Feb 13 07:37:37.390043 env[1478]: time="2024-02-13T07:37:37.389945638Z" level=info msg="RemoveContainer for \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\"" Feb 13 07:37:37.393755 env[1478]: time="2024-02-13T07:37:37.393692827Z" level=info msg="RemoveContainer for \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\" returns successfully" Feb 13 07:37:37.394023 kubelet[2556]: I0213 07:37:37.393982 2556 scope.go:115] "RemoveContainer" containerID="e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4" Feb 13 07:37:37.396145 env[1478]: time="2024-02-13T07:37:37.396055192Z" level=info msg="RemoveContainer for \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\"" Feb 13 07:37:37.399571 env[1478]: time="2024-02-13T07:37:37.399472563Z" level=info msg="RemoveContainer for \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\" returns successfully" Feb 13 07:37:37.399889 kubelet[2556]: I0213 07:37:37.399815 2556 scope.go:115] "RemoveContainer" containerID="1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5" Feb 13 07:37:37.400425 env[1478]: time="2024-02-13T07:37:37.400210866Z" level=error msg="ContainerStatus for \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\": not found" Feb 13 07:37:37.400807 kubelet[2556]: E0213 07:37:37.400727 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\": not found" containerID="1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5" Feb 13 07:37:37.400997 kubelet[2556]: I0213 07:37:37.400815 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5} err="failed to get container status \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1875d06cf68fae4ebfc3cf17f10b591f8b63cbe6a5cc58a1819af96b9ef921d5\": not found" Feb 13 07:37:37.400997 kubelet[2556]: I0213 07:37:37.400848 2556 scope.go:115] "RemoveContainer" containerID="5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba" Feb 13 07:37:37.401498 env[1478]: time="2024-02-13T07:37:37.401284631Z" level=error msg="ContainerStatus for \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\": not found" Feb 13 07:37:37.401792 kubelet[2556]: E0213 07:37:37.401714 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\": not found" containerID="5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba" Feb 13 07:37:37.401792 kubelet[2556]: I0213 07:37:37.401784 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba} err="failed to get container status \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d7401ebbb299163192ee3ce114c3e3d45d03d8ecd4d2d261e93f844501a1dba\": not found" Feb 13 07:37:37.402071 kubelet[2556]: I0213 07:37:37.401822 2556 scope.go:115] "RemoveContainer" containerID="3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b" Feb 13 07:37:37.402301 env[1478]: time="2024-02-13T07:37:37.402174359Z" level=error msg="ContainerStatus for \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\": not found" Feb 13 07:37:37.402600 kubelet[2556]: E0213 07:37:37.402522 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\": not found" containerID="3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b" Feb 13 07:37:37.402600 kubelet[2556]: I0213 07:37:37.402590 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b} err="failed to get container status 
\"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a0a1d44d4fd14f3abf34dc6b369b7d12df43f97de3e45ce0632e7b90eb1a27b\": not found" Feb 13 07:37:37.402886 kubelet[2556]: I0213 07:37:37.402617 2556 scope.go:115] "RemoveContainer" containerID="0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a" Feb 13 07:37:37.403077 env[1478]: time="2024-02-13T07:37:37.402950894Z" level=error msg="ContainerStatus for \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\": not found" Feb 13 07:37:37.403277 kubelet[2556]: E0213 07:37:37.403249 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\": not found" containerID="0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a" Feb 13 07:37:37.403429 kubelet[2556]: I0213 07:37:37.403314 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a} err="failed to get container status \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d681d3292261ba85b01f909fb8538df65a320759871f19c716a7075bbf01e1a\": not found" Feb 13 07:37:37.403429 kubelet[2556]: I0213 07:37:37.403346 2556 scope.go:115] "RemoveContainer" containerID="e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4" Feb 13 07:37:37.403838 env[1478]: time="2024-02-13T07:37:37.403718876Z" level=error msg="ContainerStatus for \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\": not found" Feb 13 07:37:37.404188 kubelet[2556]: E0213 07:37:37.404123 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\": not found" containerID="e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4" Feb 13 07:37:37.404316 kubelet[2556]: I0213 07:37:37.404203 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4} err="failed to get container status \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e19363d5fed40f044e0dd4df077c0485504f25254ef4d6a1b0bd4239ede16fa4\": not found" Feb 13 07:37:37.404316 kubelet[2556]: I0213 07:37:37.404233 2556 scope.go:115] "RemoveContainer" containerID="d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372" Feb 13 07:37:37.406542 env[1478]: time="2024-02-13T07:37:37.406440808Z" level=info msg="RemoveContainer for \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\"" Feb 13 07:37:37.409902 env[1478]: time="2024-02-13T07:37:37.409837487Z" level=info msg="RemoveContainer for 
\"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" returns successfully" Feb 13 07:37:37.410207 kubelet[2556]: I0213 07:37:37.410154 2556 scope.go:115] "RemoveContainer" containerID="d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372" Feb 13 07:37:37.410775 env[1478]: time="2024-02-13T07:37:37.410600238Z" level=error msg="ContainerStatus for \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\": not found" Feb 13 07:37:37.411006 kubelet[2556]: E0213 07:37:37.410971 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\": not found" containerID="d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372" Feb 13 07:37:37.411207 kubelet[2556]: I0213 07:37:37.411037 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372} err="failed to get container status \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\": rpc error: code = NotFound desc = an error occurred when try to find container \"d868087014cd38c81dc706e302814edb00257c8c8b719e9d6c96c8fdc3f4b372\": not found" Feb 13 07:37:37.430479 kubelet[2556]: I0213 07:37:37.430378 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-run\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.430685 kubelet[2556]: I0213 07:37:37.430500 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-bpf-maps\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.430685 kubelet[2556]: I0213 07:37:37.430563 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-kernel\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.430685 kubelet[2556]: I0213 07:37:37.430492 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.430685 kubelet[2556]: I0213 07:37:37.430619 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-xtables-lock\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.430685 kubelet[2556]: I0213 07:37:37.430608 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.431191 kubelet[2556]: I0213 07:37:37.430700 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-config-path\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.431191 kubelet[2556]: I0213 07:37:37.430714 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.431191 kubelet[2556]: I0213 07:37:37.430752 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-cgroup\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.431191 kubelet[2556]: I0213 07:37:37.430688 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.431191 kubelet[2556]: I0213 07:37:37.430818 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-net\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.431784 kubelet[2556]: I0213 07:37:37.430842 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.431784 kubelet[2556]: I0213 07:37:37.430861 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.431784 kubelet[2556]: I0213 07:37:37.430873 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hostproc\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.431784 kubelet[2556]: I0213 07:37:37.430913 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hostproc" (OuterVolumeSpecName: "hostproc") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.431784 kubelet[2556]: I0213 07:37:37.430991 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cni-path\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.432315 kubelet[2556]: I0213 07:37:37.431057 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-etc-cni-netd\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.432315 kubelet[2556]: I0213 07:37:37.431069 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cni-path" (OuterVolumeSpecName: "cni-path") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.432315 kubelet[2556]: I0213 07:37:37.431121 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-lib-modules\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.432315 kubelet[2556]: W0213 07:37:37.431123 2556 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:37:37.432315 kubelet[2556]: I0213 07:37:37.431154 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.432315 kubelet[2556]: I0213 07:37:37.431193 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431204 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hubble-tls\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431309 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpxl5\" (UniqueName: \"kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-kube-api-access-vpxl5\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431377 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-clustermesh-secrets\") pod \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\" (UID: \"f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55\") " Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431477 2556 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-lib-modules\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431517 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/24c56616-2415-4d16-9d03-0d7c06962ec2-cilium-config-path\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431547 2556 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cni-path\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.432969 kubelet[2556]: I0213 07:37:37.431576 2556 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-etc-cni-netd\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431627 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431661 2556 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-xtables-lock\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431694 2556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lpgdn\" (UniqueName: \"kubernetes.io/projected/24c56616-2415-4d16-9d03-0d7c06962ec2-kube-api-access-lpgdn\") on node \"ci-3510.3.2-a-fe1fbff781\" 
DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431725 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-run\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431755 2556 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-bpf-maps\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431785 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-cgroup\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431817 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-host-proc-sys-net\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.433819 kubelet[2556]: I0213 07:37:37.431849 2556 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hostproc\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.436931 kubelet[2556]: I0213 07:37:37.436830 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:37:37.438330 kubelet[2556]: I0213 07:37:37.438234 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:37.438582 kubelet[2556]: I0213 07:37:37.438380 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:37:37.438582 kubelet[2556]: I0213 07:37:37.438441 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-kube-api-access-vpxl5" (OuterVolumeSpecName: "kube-api-access-vpxl5") pod "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" (UID: "f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55"). InnerVolumeSpecName "kube-api-access-vpxl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:37.497910 kubelet[2556]: I0213 07:37:37.497814 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=24c56616-2415-4d16-9d03-0d7c06962ec2 path="/var/lib/kubelet/pods/24c56616-2415-4d16-9d03-0d7c06962ec2/volumes" Feb 13 07:37:37.507337 systemd[1]: Removed slice kubepods-burstable-podf4bcaa9a_7aca_4cd2_b879_af34c9ad2c55.slice. Feb 13 07:37:37.507635 systemd[1]: kubepods-burstable-podf4bcaa9a_7aca_4cd2_b879_af34c9ad2c55.slice: Consumed 11.956s CPU time. Feb 13 07:37:37.532985 kubelet[2556]: I0213 07:37:37.532919 2556 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-hubble-tls\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.532985 kubelet[2556]: I0213 07:37:37.532998 2556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vpxl5\" (UniqueName: \"kubernetes.io/projected/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-kube-api-access-vpxl5\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.533385 kubelet[2556]: I0213 07:37:37.533037 2556 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-clustermesh-secrets\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:37.533385 kubelet[2556]: I0213 07:37:37.533073 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55-cilium-config-path\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:38.116474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21-rootfs.mount: Deactivated successfully. Feb 13 07:37:38.116555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21-shm.mount: Deactivated successfully. Feb 13 07:37:38.116612 systemd[1]: var-lib-kubelet-pods-24c56616\x2d2415\x2d4d16\x2d9d03\x2d0d7c06962ec2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlpgdn.mount: Deactivated successfully. Feb 13 07:37:38.116672 systemd[1]: var-lib-kubelet-pods-f4bcaa9a\x2d7aca\x2d4cd2\x2db879\x2daf34c9ad2c55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvpxl5.mount: Deactivated successfully. Feb 13 07:37:38.116725 systemd[1]: var-lib-kubelet-pods-f4bcaa9a\x2d7aca\x2d4cd2\x2db879\x2daf34c9ad2c55-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 07:37:38.116778 systemd[1]: var-lib-kubelet-pods-f4bcaa9a\x2d7aca\x2d4cd2\x2db879\x2daf34c9ad2c55-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:37:39.044496 sshd[4760]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:39.051487 systemd[1]: sshd@25-145.40.90.207:22-139.178.68.195:60216.service: Deactivated successfully. Feb 13 07:37:39.052176 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 07:37:39.052694 systemd-logind[1466]: Session 25 logged out. Waiting for processes to exit. Feb 13 07:37:39.053284 systemd[1]: Started sshd@26-145.40.90.207:22-139.178.68.195:52244.service. Feb 13 07:37:39.053960 systemd-logind[1466]: Removed session 25. 
Feb 13 07:37:39.087077 sshd[4933]: Accepted publickey for core from 139.178.68.195 port 52244 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:39.088251 sshd[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:39.092094 systemd-logind[1466]: New session 26 of user core. Feb 13 07:37:39.092950 systemd[1]: Started session-26.scope. Feb 13 07:37:39.470488 sshd[4933]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:39.475731 systemd[1]: sshd@26-145.40.90.207:22-139.178.68.195:52244.service: Deactivated successfully. Feb 13 07:37:39.476450 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 07:37:39.476916 systemd-logind[1466]: Session 26 logged out. Waiting for processes to exit. Feb 13 07:37:39.478086 systemd[1]: Started sshd@27-145.40.90.207:22-139.178.68.195:52254.service. Feb 13 07:37:39.478983 systemd-logind[1466]: Removed session 26. Feb 13 07:37:39.481258 kubelet[2556]: I0213 07:37:39.481232 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:39.481575 kubelet[2556]: E0213 07:37:39.481309 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="24c56616-2415-4d16-9d03-0d7c06962ec2" containerName="cilium-operator" Feb 13 07:37:39.481575 kubelet[2556]: E0213 07:37:39.481318 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" containerName="cilium-agent" Feb 13 07:37:39.481575 kubelet[2556]: E0213 07:37:39.481324 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" containerName="mount-cgroup" Feb 13 07:37:39.481575 kubelet[2556]: E0213 07:37:39.481329 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" containerName="apply-sysctl-overwrites" Feb 13 07:37:39.481575 kubelet[2556]: E0213 07:37:39.481332 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" containerName="mount-bpf-fs" Feb 13 07:37:39.481575 kubelet[2556]: E0213 07:37:39.481336 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" containerName="clean-cilium-state" Feb 13 07:37:39.481575 kubelet[2556]: I0213 07:37:39.481353 2556 memory_manager.go:346] "RemoveStaleState removing state" podUID="24c56616-2415-4d16-9d03-0d7c06962ec2" containerName="cilium-operator" Feb 13 07:37:39.481575 kubelet[2556]: I0213 07:37:39.481357 2556 memory_manager.go:346] "RemoveStaleState removing state" podUID="f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55" containerName="cilium-agent" Feb 13 07:37:39.485212 systemd[1]: Created slice kubepods-burstable-pod10bb01a8_b339_44ec_b7ec_d578044de77a.slice. Feb 13 07:37:39.491916 kubelet[2556]: I0213 07:37:39.491899 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55 path="/var/lib/kubelet/pods/f4bcaa9a-7aca-4cd2-b879-af34c9ad2c55/volumes" Feb 13 07:37:39.516358 sshd[4956]: Accepted publickey for core from 139.178.68.195 port 52254 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:39.519808 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:39.529254 systemd-logind[1466]: New session 27 of user core. Feb 13 07:37:39.531804 systemd[1]: Started session-27.scope. 
Feb 13 07:37:39.547613 kubelet[2556]: I0213 07:37:39.547529 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-clustermesh-secrets\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.547822 kubelet[2556]: I0213 07:37:39.547721 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-cgroup\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.547822 kubelet[2556]: I0213 07:37:39.547812 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-etc-cni-netd\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548148 kubelet[2556]: I0213 07:37:39.547942 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-xtables-lock\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548148 kubelet[2556]: I0213 07:37:39.548038 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-ipsec-secrets\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548484 kubelet[2556]: I0213 07:37:39.548158 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-bpf-maps\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548484 kubelet[2556]: I0213 07:37:39.548215 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-hostproc\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548484 kubelet[2556]: I0213 07:37:39.548339 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cni-path\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548484 kubelet[2556]: I0213 07:37:39.548470 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-lib-modules\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548881 kubelet[2556]: I0213 07:37:39.548606 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-config-path\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.548881 kubelet[2556]: I0213 07:37:39.548730 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-net\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.549063 kubelet[2556]: I0213 07:37:39.548902 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-kernel\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.549063 kubelet[2556]: I0213 07:37:39.548979 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckvq\" (UniqueName: \"kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-kube-api-access-8ckvq\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.549234 kubelet[2556]: I0213 07:37:39.549127 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-run\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.549234 kubelet[2556]: I0213 07:37:39.549220 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-hubble-tls\") pod \"cilium-7wqx9\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " pod="kube-system/cilium-7wqx9" Feb 13 07:37:39.684763 sshd[4956]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:39.686522 systemd[1]: sshd@27-145.40.90.207:22-139.178.68.195:52254.service: Deactivated successfully. Feb 13 07:37:39.686894 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 07:37:39.687272 systemd-logind[1466]: Session 27 logged out. Waiting for processes to exit. Feb 13 07:37:39.688020 systemd[1]: Started sshd@28-145.40.90.207:22-139.178.68.195:52260.service. Feb 13 07:37:39.688448 systemd-logind[1466]: Removed session 27. Feb 13 07:37:39.724096 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 52260 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:37:39.727264 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:37:39.737888 systemd-logind[1466]: New session 28 of user core. Feb 13 07:37:39.741006 systemd[1]: Started session-28.scope. Feb 13 07:37:39.789015 env[1478]: time="2024-02-13T07:37:39.788880649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wqx9,Uid:10bb01a8-b339-44ec-b7ec-d578044de77a,Namespace:kube-system,Attempt:0,}" Feb 13 07:37:39.811665 env[1478]: time="2024-02-13T07:37:39.811428688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:39.811665 env[1478]: time="2024-02-13T07:37:39.811547320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:39.811665 env[1478]: time="2024-02-13T07:37:39.811600251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:39.812202 env[1478]: time="2024-02-13T07:37:39.812020680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1 pid=4999 runtime=io.containerd.runc.v2 Feb 13 07:37:39.836272 systemd[1]: Started cri-containerd-7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1.scope. Feb 13 07:37:39.857162 env[1478]: time="2024-02-13T07:37:39.857122119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7wqx9,Uid:10bb01a8-b339-44ec-b7ec-d578044de77a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\"" Feb 13 07:37:39.858844 env[1478]: time="2024-02-13T07:37:39.858793625Z" level=info msg="CreateContainer within sandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:37:39.865119 env[1478]: time="2024-02-13T07:37:39.865026521Z" level=info msg="CreateContainer within sandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\"" Feb 13 07:37:39.865456 env[1478]: time="2024-02-13T07:37:39.865408757Z" level=info msg="StartContainer for \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\"" Feb 13 07:37:39.874208 systemd[1]: Started cri-containerd-1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345.scope. Feb 13 07:37:39.881091 systemd[1]: cri-containerd-1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345.scope: Deactivated successfully. Feb 13 07:37:39.881262 systemd[1]: Stopped cri-containerd-1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345.scope. 
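
The "starting signal loop ... pid=4999 runtime=io.containerd.runc.v2" entry is the containerd runc v2 shim starting for the new sandbox; its state lives under the /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id> path named in the entry. The cri-containerd-1033f832....scope being stopped immediately after it started is the first hint that the init container process never came up. On the node, live shim tasks can be listed with the stock containerd client, if present (a sketch):

    $ ctr --namespace k8s.io tasks list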
Feb 13 07:37:39.889222 env[1478]: time="2024-02-13T07:37:39.889192327Z" level=info msg="shim disconnected" id=1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345 Feb 13 07:37:39.889222 env[1478]: time="2024-02-13T07:37:39.889222178Z" level=warning msg="cleaning up after shim disconnected" id=1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345 namespace=k8s.io Feb 13 07:37:39.889329 env[1478]: time="2024-02-13T07:37:39.889228376Z" level=info msg="cleaning up dead shim" Feb 13 07:37:39.892756 env[1478]: time="2024-02-13T07:37:39.892735276Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5076 runtime=io.containerd.runc.v2\ntime=\"2024-02-13T07:37:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 13 07:37:39.892945 env[1478]: time="2024-02-13T07:37:39.892862767Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Feb 13 07:37:39.893040 env[1478]: time="2024-02-13T07:37:39.892987984Z" level=error msg="Failed to pipe stdout of container \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\"" error="reading from a closed fifo" Feb 13 07:37:39.893040 env[1478]: time="2024-02-13T07:37:39.892999296Z" level=error msg="Failed to pipe stderr of container \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\"" error="reading from a closed fifo" Feb 13 07:37:39.893667 env[1478]: time="2024-02-13T07:37:39.893616139Z" level=error msg="StartContainer for \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 13 07:37:39.893813 kubelet[2556]: E0213 07:37:39.893766 2556 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345" Feb 13 07:37:39.893869 kubelet[2556]: E0213 07:37:39.893845 2556 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 07:37:39.893869 kubelet[2556]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 07:37:39.893869 kubelet[2556]: rm /hostbin/cilium-mount Feb 13 07:37:39.893928 kubelet[2556]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8ckvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-7wqx9_kube-system(10bb01a8-b339-44ec-b7ec-d578044de77a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 13 07:37:39.893928 kubelet[2556]: E0213 07:37:39.893869 2556 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7wqx9" podUID=10bb01a8-b339-44ec-b7ec-d578044de77a Feb 13 07:37:40.382723 env[1478]: time="2024-02-13T07:37:40.382629816Z" level=info msg="StopPodSandbox for \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\"" Feb 13 07:37:40.383019 env[1478]: time="2024-02-13T07:37:40.382770455Z" level=info msg="Container to stop \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:40.391942 systemd[1]: cri-containerd-7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1.scope: Deactivated successfully. 
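
The root cause in the error chain above is runc failing at "write /proc/self/attr/keycreate: invalid argument": before exec'ing the init process, runc writes the pod's SELinux label (the spec above requests SELinuxOptions type spc_t) into the kernel's key-creation attribute, and the loaded policy rejects that context, so container creation aborts and kubelet records the RunContainerError. A rough way to observe the same rejection on the node (a sketch, assuming SELinux is enabled and the procfs attributes are exposed):

    $ cat /sys/fs/selinux/enforce                                   # 1 = enforcing, 0 = permissive
    $ echo system_u:system_r:spc_t:s0 > /proc/self/attr/keycreate   # fails with EINVAL when the policy rejects the context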
Feb 13 07:37:40.418058 env[1478]: time="2024-02-13T07:37:40.417993238Z" level=info msg="shim disconnected" id=7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1 Feb 13 07:37:40.418058 env[1478]: time="2024-02-13T07:37:40.418028405Z" level=warning msg="cleaning up after shim disconnected" id=7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1 namespace=k8s.io Feb 13 07:37:40.418058 env[1478]: time="2024-02-13T07:37:40.418037246Z" level=info msg="cleaning up dead shim" Feb 13 07:37:40.422652 env[1478]: time="2024-02-13T07:37:40.422596401Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5105 runtime=io.containerd.runc.v2\n" Feb 13 07:37:40.422840 env[1478]: time="2024-02-13T07:37:40.422791016Z" level=info msg="TearDown network for sandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" successfully" Feb 13 07:37:40.422840 env[1478]: time="2024-02-13T07:37:40.422811445Z" level=info msg="StopPodSandbox for \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" returns successfully" Feb 13 07:37:40.557904 kubelet[2556]: I0213 07:37:40.557799 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-run\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.557904 kubelet[2556]: I0213 07:37:40.557900 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-xtables-lock\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.557959 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-bpf-maps\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.557946 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558022 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cni-path\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558010 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558077 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-lib-modules\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558086 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558150 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-config-path\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558130 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cni-path" (OuterVolumeSpecName: "cni-path") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558183 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558212 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-etc-cni-netd\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558256 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558324 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-net\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558437 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-hubble-tls\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558461 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.559186 kubelet[2556]: I0213 07:37:40.558540 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-clustermesh-secrets\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.561894 kubelet[2556]: W0213 07:37:40.558551 2556 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/10bb01a8-b339-44ec-b7ec-d578044de77a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.558622 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-hostproc\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.558732 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-cgroup\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.558728 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-hostproc" (OuterVolumeSpecName: "hostproc") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.559289 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-ipsec-secrets\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.559463 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-kernel\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.559577 2556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ckvq\" (UniqueName: \"kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-kube-api-access-8ckvq\") pod \"10bb01a8-b339-44ec-b7ec-d578044de77a\" (UID: \"10bb01a8-b339-44ec-b7ec-d578044de77a\") " Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.559805 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.559858 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560240 2556 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-etc-cni-netd\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560362 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-net\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560463 2556 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-hostproc\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560535 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-cgroup\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560619 2556 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560685 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-run\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560766 2556 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-xtables-lock\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560821 2556 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-bpf-maps\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.561894 kubelet[2556]: I0213 07:37:40.560876 2556 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-cni-path\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.563094 kubelet[2556]: I0213 07:37:40.560946 2556 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10bb01a8-b339-44ec-b7ec-d578044de77a-lib-modules\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.563675 kubelet[2556]: I0213 07:37:40.563660 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:37:40.563675 kubelet[2556]: I0213 07:37:40.563662 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:40.563742 kubelet[2556]: I0213 07:37:40.563693 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-kube-api-access-8ckvq" (OuterVolumeSpecName: "kube-api-access-8ckvq") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "kube-api-access-8ckvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:40.563764 kubelet[2556]: I0213 07:37:40.563743 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:37:40.563795 kubelet[2556]: I0213 07:37:40.563783 2556 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10bb01a8-b339-44ec-b7ec-d578044de77a" (UID: "10bb01a8-b339-44ec-b7ec-d578044de77a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:37:40.657591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1-rootfs.mount: Deactivated successfully. Feb 13 07:37:40.657846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1-shm.mount: Deactivated successfully. Feb 13 07:37:40.658004 systemd[1]: var-lib-kubelet-pods-10bb01a8\x2db339\x2d44ec\x2db7ec\x2dd578044de77a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ckvq.mount: Deactivated successfully. Feb 13 07:37:40.658040 systemd[1]: var-lib-kubelet-pods-10bb01a8\x2db339\x2d44ec\x2db7ec\x2dd578044de77a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 07:37:40.658072 systemd[1]: var-lib-kubelet-pods-10bb01a8\x2db339\x2d44ec\x2db7ec\x2dd578044de77a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:37:40.658103 systemd[1]: var-lib-kubelet-pods-10bb01a8\x2db339\x2d44ec\x2db7ec\x2dd578044de77a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 13 07:37:40.661682 kubelet[2556]: I0213 07:37:40.661639 2556 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-hubble-tls\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.661682 kubelet[2556]: I0213 07:37:40.661658 2556 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-clustermesh-secrets\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.661682 kubelet[2556]: I0213 07:37:40.661666 2556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8ckvq\" (UniqueName: \"kubernetes.io/projected/10bb01a8-b339-44ec-b7ec-d578044de77a-kube-api-access-8ckvq\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.661682 kubelet[2556]: I0213 07:37:40.661672 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.661682 kubelet[2556]: I0213 07:37:40.661678 2556 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10bb01a8-b339-44ec-b7ec-d578044de77a-cilium-config-path\") on node \"ci-3510.3.2-a-fe1fbff781\" DevicePath \"\"" Feb 13 07:37:40.908636 kubelet[2556]: E0213 07:37:40.908428 2556 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 07:37:41.383644 kubelet[2556]: I0213 07:37:41.383596 2556 scope.go:115] "RemoveContainer" containerID="1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345" Feb 13 07:37:41.384214 env[1478]: time="2024-02-13T07:37:41.384189426Z" level=info msg="RemoveContainer for \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\"" Feb 13 07:37:41.385601 env[1478]: time="2024-02-13T07:37:41.385585013Z" level=info msg="RemoveContainer for \"1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345\" returns successfully" Feb 13 07:37:41.386175 systemd[1]: Removed slice kubepods-burstable-pod10bb01a8_b339_44ec_b7ec_d578044de77a.slice. Feb 13 07:37:41.405801 kubelet[2556]: I0213 07:37:41.405774 2556 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:41.405924 kubelet[2556]: E0213 07:37:41.405838 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="10bb01a8-b339-44ec-b7ec-d578044de77a" containerName="mount-cgroup" Feb 13 07:37:41.405924 kubelet[2556]: I0213 07:37:41.405860 2556 memory_manager.go:346] "RemoveStaleState removing state" podUID="10bb01a8-b339-44ec-b7ec-d578044de77a" containerName="mount-cgroup" Feb 13 07:37:41.410044 systemd[1]: Created slice kubepods-burstable-pod3e698926_ca7f_4611_95fb_4ecdfbb787f2.slice. 
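
RemoveContainer above clears the failed mount-cgroup container from the runtime, and the RemoveStaleState entries are the kubelet cpu/memory managers dropping per-container accounting for the deleted pod before the Topology Admit Handler accepts its replacement; a fresh kubepods-burstable-pod3e698926....slice cgroup is created for it right away. From the API side, the replacement pod (named cilium-qm46f in the entries that follow) could be watched with kubectl (a sketch):

    $ kubectl -n kube-system get pod cilium-qm46f -o wide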
Feb 13 07:37:41.497978 kubelet[2556]: I0213 07:37:41.497889 2556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=10bb01a8-b339-44ec-b7ec-d578044de77a path="/var/lib/kubelet/pods/10bb01a8-b339-44ec-b7ec-d578044de77a/volumes" Feb 13 07:37:41.566786 kubelet[2556]: I0213 07:37:41.566688 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e698926-ca7f-4611-95fb-4ecdfbb787f2-cilium-ipsec-secrets\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.566804 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-cilium-run\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.566948 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e698926-ca7f-4611-95fb-4ecdfbb787f2-cilium-config-path\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.567138 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e698926-ca7f-4611-95fb-4ecdfbb787f2-hubble-tls\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.567292 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-cni-path\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.567374 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-etc-cni-netd\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.567483 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-host-proc-sys-kernel\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.567625 kubelet[2556]: I0213 07:37:41.567600 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-lib-modules\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568359 kubelet[2556]: I0213 07:37:41.567746 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-host-proc-sys-net\") pod \"cilium-qm46f\" (UID: 
\"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568359 kubelet[2556]: I0213 07:37:41.567981 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e698926-ca7f-4611-95fb-4ecdfbb787f2-clustermesh-secrets\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568359 kubelet[2556]: I0213 07:37:41.568186 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-xtables-lock\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568359 kubelet[2556]: I0213 07:37:41.568306 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v878c\" (UniqueName: \"kubernetes.io/projected/3e698926-ca7f-4611-95fb-4ecdfbb787f2-kube-api-access-v878c\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568830 kubelet[2556]: I0213 07:37:41.568428 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-bpf-maps\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568830 kubelet[2556]: I0213 07:37:41.568561 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-cilium-cgroup\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.568830 kubelet[2556]: I0213 07:37:41.568689 2556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e698926-ca7f-4611-95fb-4ecdfbb787f2-hostproc\") pod \"cilium-qm46f\" (UID: \"3e698926-ca7f-4611-95fb-4ecdfbb787f2\") " pod="kube-system/cilium-qm46f" Feb 13 07:37:41.713237 env[1478]: time="2024-02-13T07:37:41.713056700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qm46f,Uid:3e698926-ca7f-4611-95fb-4ecdfbb787f2,Namespace:kube-system,Attempt:0,}" Feb 13 07:37:41.734782 env[1478]: time="2024-02-13T07:37:41.734539664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:41.734782 env[1478]: time="2024-02-13T07:37:41.734652646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:41.734782 env[1478]: time="2024-02-13T07:37:41.734691016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:41.735370 env[1478]: time="2024-02-13T07:37:41.735147252Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb pid=5133 runtime=io.containerd.runc.v2 Feb 13 07:37:41.763885 systemd[1]: Started cri-containerd-e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb.scope. Feb 13 07:37:41.811344 env[1478]: time="2024-02-13T07:37:41.811250309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qm46f,Uid:3e698926-ca7f-4611-95fb-4ecdfbb787f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\"" Feb 13 07:37:41.816850 env[1478]: time="2024-02-13T07:37:41.816729473Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:37:41.830340 env[1478]: time="2024-02-13T07:37:41.830221230Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56\"" Feb 13 07:37:41.831132 env[1478]: time="2024-02-13T07:37:41.831054100Z" level=info msg="StartContainer for \"c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56\"" Feb 13 07:37:41.866425 systemd[1]: Started cri-containerd-c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56.scope. Feb 13 07:37:41.923042 env[1478]: time="2024-02-13T07:37:41.922919121Z" level=info msg="StartContainer for \"c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56\" returns successfully" Feb 13 07:37:41.943748 systemd[1]: cri-containerd-c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56.scope: Deactivated successfully. 
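
Contrast with the first attempt: this time "StartContainer ... returns successfully" and the cri-containerd-c58bc791....scope is deactivated only after the init container exits on its own, which is the expected run-once-and-exit lifecycle for init containers. Their progress is also visible in pod status (a sketch):

    $ kubectl -n kube-system get pod cilium-qm46f -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'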
Feb 13 07:37:41.975771 env[1478]: time="2024-02-13T07:37:41.975615590Z" level=info msg="shim disconnected" id=c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56 Feb 13 07:37:41.975771 env[1478]: time="2024-02-13T07:37:41.975684731Z" level=warning msg="cleaning up after shim disconnected" id=c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56 namespace=k8s.io Feb 13 07:37:41.975771 env[1478]: time="2024-02-13T07:37:41.975702403Z" level=info msg="cleaning up dead shim" Feb 13 07:37:41.984796 env[1478]: time="2024-02-13T07:37:41.984724923Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5217 runtime=io.containerd.runc.v2\n" Feb 13 07:37:42.394483 env[1478]: time="2024-02-13T07:37:42.394369254Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 07:37:42.402878 env[1478]: time="2024-02-13T07:37:42.402805169Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d\"" Feb 13 07:37:42.403163 env[1478]: time="2024-02-13T07:37:42.403098506Z" level=info msg="StartContainer for \"8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d\"" Feb 13 07:37:42.412433 systemd[1]: Started cri-containerd-8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d.scope. Feb 13 07:37:42.424013 env[1478]: time="2024-02-13T07:37:42.423959082Z" level=info msg="StartContainer for \"8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d\" returns successfully" Feb 13 07:37:42.427876 systemd[1]: cri-containerd-8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d.scope: Deactivated successfully. 
Feb 13 07:37:42.438216 env[1478]: time="2024-02-13T07:37:42.438154314Z" level=info msg="shim disconnected" id=8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d Feb 13 07:37:42.438216 env[1478]: time="2024-02-13T07:37:42.438183205Z" level=warning msg="cleaning up after shim disconnected" id=8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d namespace=k8s.io Feb 13 07:37:42.438216 env[1478]: time="2024-02-13T07:37:42.438189787Z" level=info msg="cleaning up dead shim" Feb 13 07:37:42.442293 env[1478]: time="2024-02-13T07:37:42.442249092Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5277 runtime=io.containerd.runc.v2\n" Feb 13 07:37:42.995426 kubelet[2556]: W0213 07:37:42.995263 2556 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10bb01a8_b339_44ec_b7ec_d578044de77a.slice/cri-containerd-1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345.scope WatchSource:0}: container "1033f832a1b87f66adb02b723329c269c7c5a3d3bb6ff8c71763c223c9a00345" in namespace "k8s.io": not found Feb 13 07:37:43.404140 env[1478]: time="2024-02-13T07:37:43.404004580Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 07:37:43.421983 env[1478]: time="2024-02-13T07:37:43.421917478Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419\"" Feb 13 07:37:43.422327 env[1478]: time="2024-02-13T07:37:43.422310875Z" level=info msg="StartContainer for \"5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419\"" Feb 13 07:37:43.422718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715771341.mount: Deactivated successfully. Feb 13 07:37:43.431869 systemd[1]: Started cri-containerd-5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419.scope. Feb 13 07:37:43.445321 env[1478]: time="2024-02-13T07:37:43.445290542Z" level=info msg="StartContainer for \"5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419\" returns successfully" Feb 13 07:37:43.446910 systemd[1]: cri-containerd-5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419.scope: Deactivated successfully. Feb 13 07:37:43.467992 env[1478]: time="2024-02-13T07:37:43.467922521Z" level=info msg="shim disconnected" id=5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419 Feb 13 07:37:43.467992 env[1478]: time="2024-02-13T07:37:43.467951187Z" level=warning msg="cleaning up after shim disconnected" id=5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419 namespace=k8s.io Feb 13 07:37:43.467992 env[1478]: time="2024-02-13T07:37:43.467957524Z" level=info msg="cleaning up dead shim" Feb 13 07:37:43.471786 env[1478]: time="2024-02-13T07:37:43.471738293Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5333 runtime=io.containerd.runc.v2\n" Feb 13 07:37:43.678351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419-rootfs.mount: Deactivated successfully. 
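
The "Failed to process watch event ... not found" warnings here (and again below) are kubelet's embedded cadvisor seeing cgroup inotify events for container scopes that have already been cleaned up, so they are benign during this churn. The cgroup paths they name can be inspected directly (a sketch; the exact layout depends on the cgroup hierarchy in use, shown here for a unified v2 mount):

    $ ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/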
Feb 13 07:37:44.383207 kubelet[2556]: I0213 07:37:44.383148 2556 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-fe1fbff781" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-13 07:37:44.382994451 +0000 UTC m=+1308.946520217 LastTransitionTime:2024-02-13 07:37:44.382994451 +0000 UTC m=+1308.946520217 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 13 07:37:44.402483 env[1478]: time="2024-02-13T07:37:44.402455848Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 07:37:44.406938 env[1478]: time="2024-02-13T07:37:44.406911041Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269\"" Feb 13 07:37:44.407290 env[1478]: time="2024-02-13T07:37:44.407271510Z" level=info msg="StartContainer for \"4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269\"" Feb 13 07:37:44.408247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487534008.mount: Deactivated successfully. Feb 13 07:37:44.418858 systemd[1]: Started cri-containerd-4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269.scope. Feb 13 07:37:44.433911 env[1478]: time="2024-02-13T07:37:44.433850018Z" level=info msg="StartContainer for \"4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269\" returns successfully" Feb 13 07:37:44.434531 systemd[1]: cri-containerd-4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269.scope: Deactivated successfully. Feb 13 07:37:44.447796 env[1478]: time="2024-02-13T07:37:44.447733230Z" level=info msg="shim disconnected" id=4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269 Feb 13 07:37:44.447796 env[1478]: time="2024-02-13T07:37:44.447771036Z" level=warning msg="cleaning up after shim disconnected" id=4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269 namespace=k8s.io Feb 13 07:37:44.447796 env[1478]: time="2024-02-13T07:37:44.447780381Z" level=info msg="cleaning up dead shim" Feb 13 07:37:44.452556 env[1478]: time="2024-02-13T07:37:44.452500987Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5388 runtime=io.containerd.runc.v2\n" Feb 13 07:37:44.678414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269-rootfs.mount: Deactivated successfully. 
Feb 13 07:37:45.416709 env[1478]: time="2024-02-13T07:37:45.416605790Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 07:37:45.427134 env[1478]: time="2024-02-13T07:37:45.427092164Z" level=info msg="CreateContainer within sandbox \"e99ea809da0992382638a8b171cabe820e5cf3761c99db1e4a3cedc3edb74dfb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27af34706b3af9295838afa08758994d4b5fba96713dae2ce6b1ab0cdedc1715\"" Feb 13 07:37:45.427431 env[1478]: time="2024-02-13T07:37:45.427415475Z" level=info msg="StartContainer for \"27af34706b3af9295838afa08758994d4b5fba96713dae2ce6b1ab0cdedc1715\"" Feb 13 07:37:45.436032 systemd[1]: Started cri-containerd-27af34706b3af9295838afa08758994d4b5fba96713dae2ce6b1ab0cdedc1715.scope. Feb 13 07:37:45.449230 env[1478]: time="2024-02-13T07:37:45.449204392Z" level=info msg="StartContainer for \"27af34706b3af9295838afa08758994d4b5fba96713dae2ce6b1ab0cdedc1715\" returns successfully" Feb 13 07:37:45.593399 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 07:37:46.110841 kubelet[2556]: W0213 07:37:46.110723 2556 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e698926_ca7f_4611_95fb_4ecdfbb787f2.slice/cri-containerd-c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56.scope WatchSource:0}: task c58bc791d3e747681b2b2ef09b56c89dddd2a5218edde436199861c57d183f56 not found: not found Feb 13 07:37:46.458269 kubelet[2556]: I0213 07:37:46.458050 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qm46f" podStartSLOduration=5.457959185 podCreationTimestamp="2024-02-13 07:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:37:46.457200468 +0000 UTC m=+1311.020726253" watchObservedRunningTime="2024-02-13 07:37:46.457959185 +0000 UTC m=+1311.021484950" Feb 13 07:37:48.386680 systemd-networkd[1334]: lxc_health: Link UP Feb 13 07:37:48.411442 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 07:37:48.411484 systemd-networkd[1334]: lxc_health: Gained carrier Feb 13 07:37:49.221035 kubelet[2556]: W0213 07:37:49.220984 2556 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e698926_ca7f_4611_95fb_4ecdfbb787f2.slice/cri-containerd-8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d.scope WatchSource:0}: task 8f71efe6090e8fcca89338855cb74bdc892e2751e50a3539d2c63464683d9c5d not found: not found Feb 13 07:37:49.906518 systemd-networkd[1334]: lxc_health: Gained IPv6LL Feb 13 07:37:52.327048 kubelet[2556]: W0213 07:37:52.326964 2556 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e698926_ca7f_4611_95fb_4ecdfbb787f2.slice/cri-containerd-5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419.scope WatchSource:0}: task 5801b36f6ecf532391a09a6e45511a4e936ef23469c693475196a03f0c949419 not found: not found Feb 13 07:37:54.397576 sshd[4987]: pam_unix(sshd:session): session closed for user core Feb 13 07:37:54.399298 systemd[1]: sshd@28-145.40.90.207:22-139.178.68.195:52260.service: Deactivated successfully. 
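
With cilium-agent up, the kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" line is consistent with the IPsec datapath instantiating its AES-GCM transform (the pod mounts cilium-ipsec-secrets), and lxc_health gaining carrier and an IPv6 address is the agent's own health-check endpoint coming online, after which the node's CNI-not-ready condition can clear. Agent health could be verified in-cluster (a sketch, assuming the default DaemonSet name "cilium"):

    $ kubectl -n kube-system exec ds/cilium -- cilium status --brief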
Feb 13 07:37:54.399853 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 07:37:54.400280 systemd-logind[1466]: Session 28 logged out. Waiting for processes to exit.
Feb 13 07:37:54.400903 systemd-logind[1466]: Removed session 28.
Feb 13 07:37:55.435363 kubelet[2556]: W0213 07:37:55.435238 2556 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e698926_ca7f_4611_95fb_4ecdfbb787f2.slice/cri-containerd-4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269.scope WatchSource:0}: task 4ef0c274f695742aa1019ea6d477ac860e0d479f28984e2f30df422fbd5bd269 not found: not found
Feb 13 07:37:55.526008 env[1478]: time="2024-02-13T07:37:55.525858034Z" level=info msg="StopPodSandbox for \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\""
Feb 13 07:37:55.526886 env[1478]: time="2024-02-13T07:37:55.526090143Z" level=info msg="TearDown network for sandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" successfully"
Feb 13 07:37:55.526886 env[1478]: time="2024-02-13T07:37:55.526187307Z" level=info msg="StopPodSandbox for \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" returns successfully"
Feb 13 07:37:55.527127 env[1478]: time="2024-02-13T07:37:55.527056767Z" level=info msg="RemovePodSandbox for \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\""
Feb 13 07:37:55.527241 env[1478]: time="2024-02-13T07:37:55.527130159Z" level=info msg="Forcibly stopping sandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\""
Feb 13 07:37:55.527352 env[1478]: time="2024-02-13T07:37:55.527311792Z" level=info msg="TearDown network for sandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" successfully"
Feb 13 07:37:55.532255 env[1478]: time="2024-02-13T07:37:55.532184139Z" level=info msg="RemovePodSandbox \"7018fa7be04d7d5f5cb090878824fb9d2b44320b676ae3abd1003d012bebc1b1\" returns successfully"
Feb 13 07:37:55.533054 env[1478]: time="2024-02-13T07:37:55.532944461Z" level=info msg="StopPodSandbox for \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\""
Feb 13 07:37:55.533279 env[1478]: time="2024-02-13T07:37:55.533132034Z" level=info msg="TearDown network for sandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" successfully"
Feb 13 07:37:55.533279 env[1478]: time="2024-02-13T07:37:55.533222381Z" level=info msg="StopPodSandbox for \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" returns successfully"
Feb 13 07:37:55.534021 env[1478]: time="2024-02-13T07:37:55.533915335Z" level=info msg="RemovePodSandbox for \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\""
Feb 13 07:37:55.534227 env[1478]: time="2024-02-13T07:37:55.533986306Z" level=info msg="Forcibly stopping sandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\""
Feb 13 07:37:55.534227 env[1478]: time="2024-02-13T07:37:55.534166535Z" level=info msg="TearDown network for sandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" successfully"
Feb 13 07:37:55.538114 env[1478]: time="2024-02-13T07:37:55.538043020Z" level=info msg="RemovePodSandbox \"4dbe5eac39398698d03c3d4d809e485039f070075f3739eeda9c89d2566c5fd8\" returns successfully"
Feb 13 07:37:55.538741 env[1478]: time="2024-02-13T07:37:55.538665831Z" level=info msg="StopPodSandbox for \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\""
Feb 13 07:37:55.539045 env[1478]: time="2024-02-13T07:37:55.538907716Z" level=info msg="TearDown network for sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" successfully"
Feb 13 07:37:55.539273 env[1478]: time="2024-02-13T07:37:55.539040845Z" level=info msg="StopPodSandbox for \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" returns successfully"
Feb 13 07:37:55.539883 env[1478]: time="2024-02-13T07:37:55.539775902Z" level=info msg="RemovePodSandbox for \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\""
Feb 13 07:37:55.540172 env[1478]: time="2024-02-13T07:37:55.539850186Z" level=info msg="Forcibly stopping sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\""
Feb 13 07:37:55.540172 env[1478]: time="2024-02-13T07:37:55.540051477Z" level=info msg="TearDown network for sandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" successfully"
Feb 13 07:37:55.544404 env[1478]: time="2024-02-13T07:37:55.544314252Z" level=info msg="RemovePodSandbox \"2c58b38e753abc22197a6dc12f6861ce8a2c54d2a92cb456c09ad33a77dbca21\" returns successfully"