Dec 13 03:42:42.563148 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Dec 13 03:42:42.563163 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 03:42:42.563170 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:42:42.563174 kernel: BIOS-provided physical RAM map:
Dec 13 03:42:42.563178 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 03:42:42.563182 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 03:42:42.563187 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 03:42:42.563192 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 03:42:42.563196 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 03:42:42.563200 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006e2d8fff] usable
Dec 13 03:42:42.563204 kernel: BIOS-e820: [mem 0x000000006e2d9000-0x000000006e2d9fff] ACPI NVS
Dec 13 03:42:42.563208 kernel: BIOS-e820: [mem 0x000000006e2da000-0x000000006e2dafff] reserved
Dec 13 03:42:42.563212 kernel: BIOS-e820: [mem 0x000000006e2db000-0x0000000077fc4fff] usable
Dec 13 03:42:42.563216 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Dec 13 03:42:42.563222 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Dec 13 03:42:42.563226 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Dec 13 03:42:42.563230 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Dec 13 03:42:42.563235 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Dec 13 03:42:42.563239 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Dec 13 03:42:42.563243 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 03:42:42.563248 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 03:42:42.563252 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 03:42:42.563256 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 03:42:42.563261 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 03:42:42.563266 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Dec 13 03:42:42.563270 kernel: NX (Execute Disable) protection: active
Dec 13 03:42:42.563274 kernel: SMBIOS 3.2.1 present.
Dec 13 03:42:42.563279 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Dec 13 03:42:42.563283 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 03:42:42.563287 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 03:42:42.563292 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:42:42.563297 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:42:42.563301 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Dec 13 03:42:42.563307 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 03:42:42.563311 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Dec 13 03:42:42.563316 kernel: Using GB pages for direct mapping
Dec 13 03:42:42.563320 kernel: ACPI: Early table checksum verification disabled
Dec 13 03:42:42.563325 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 03:42:42.563329 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 03:42:42.563334 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Dec 13 03:42:42.563341 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 03:42:42.563346 kernel: ACPI: FACS 0x0000000079662F80 000040
Dec 13 03:42:42.563351 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Dec 13 03:42:42.563356 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Dec 13 03:42:42.563361 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 03:42:42.563366 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 03:42:42.563371 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 03:42:42.563376 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 03:42:42.563381 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 03:42:42.563386 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 03:42:42.563391 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:42:42.563396 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 03:42:42.563400 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 03:42:42.563405 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:42:42.563410 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:42:42.563415 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 03:42:42.563421 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 03:42:42.563426 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:42:42.563430 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:42:42.563435 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 03:42:42.563440 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Dec 13 03:42:42.563445 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 03:42:42.563450 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 03:42:42.563455 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 03:42:42.563460 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 'n 01072009 AMI 00010013)
Dec 13 03:42:42.563465 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 03:42:42.563470 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 03:42:42.563475 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 03:42:42.563480 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 03:42:42.563485 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 03:42:42.563489 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Dec 13 03:42:42.563494 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Dec 13 03:42:42.563499 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Dec 13 03:42:42.563505 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Dec 13 03:42:42.563510 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Dec 13 03:42:42.563514 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Dec 13 03:42:42.563519 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Dec 13 03:42:42.563524 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Dec 13 03:42:42.563529 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Dec 13 03:42:42.563534 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Dec 13 03:42:42.563538 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Dec 13 03:42:42.563543 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Dec 13 03:42:42.563549 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Dec 13 03:42:42.563554 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Dec 13 03:42:42.563559 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Dec 13 03:42:42.563564 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Dec 13 03:42:42.563569 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Dec 13 03:42:42.563573 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Dec 13 03:42:42.563581 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Dec 13 03:42:42.563586 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Dec 13 03:42:42.563591 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Dec 13 03:42:42.563597 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Dec 13 03:42:42.563602 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Dec 13 03:42:42.563607 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Dec 13 03:42:42.563611 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Dec 13 03:42:42.563616 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Dec 13 03:42:42.563621 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Dec 13 03:42:42.563626 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Dec 13 03:42:42.563631 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Dec 13 03:42:42.563636 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Dec 13 03:42:42.563641 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Dec 13 03:42:42.563646 kernel: No NUMA configuration found
Dec 13 03:42:42.563651 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Dec 13 03:42:42.563656 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Dec 13 03:42:42.563661 kernel: Zone ranges:
Dec 13 03:42:42.563666 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:42:42.563671 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 03:42:42.563675 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Dec 13 03:42:42.563680 kernel: Movable zone start for each node
Dec 13 03:42:42.563686 kernel: Early memory node ranges
Dec 13 03:42:42.563691 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 03:42:42.563696 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 03:42:42.563701 kernel: node 0: [mem 0x0000000040400000-0x000000006e2d8fff]
Dec 13 03:42:42.563706 kernel: node 0: [mem 0x000000006e2db000-0x0000000077fc4fff]
Dec 13 03:42:42.563710 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff]
Dec 13 03:42:42.563715 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Dec 13 03:42:42.563720 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Dec 13 03:42:42.563725 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Dec 13 03:42:42.563734 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:42:42.563739 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 03:42:42.563744 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 03:42:42.563750 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 03:42:42.563755 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Dec 13 03:42:42.563761 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Dec 13 03:42:42.563766 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Dec 13 03:42:42.563771 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Dec 13 03:42:42.563777 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 03:42:42.563782 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 03:42:42.563788 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 03:42:42.563793 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 03:42:42.563798 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 03:42:42.563803 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 03:42:42.563808 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 03:42:42.563813 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 03:42:42.563819 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 03:42:42.563824 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 03:42:42.563830 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 03:42:42.563835 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 03:42:42.563840 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 03:42:42.563845 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 03:42:42.563850 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 03:42:42.563855 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 03:42:42.563860 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 03:42:42.563866 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 03:42:42.563872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:42:42.563877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:42:42.563882 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:42:42.563887 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 03:42:42.563892 kernel: TSC deadline timer available
Dec 13 03:42:42.563898 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 03:42:42.563903 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Dec 13 03:42:42.563908 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 03:42:42.563914 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:42:42.563920 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 03:42:42.563925 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 03:42:42.563930 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 03:42:42.563935 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 03:42:42.563941 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327
Dec 13 03:42:42.563946 kernel: Policy zone: Normal
Dec 13 03:42:42.563952 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:42:42.563957 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 03:42:42.563963 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 03:42:42.563968 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 03:42:42.563974 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:42:42.563979 kernel: Memory: 32681612K/33411988K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 730116K reserved, 0K cma-reserved)
Dec 13 03:42:42.563985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 03:42:42.563990 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 03:42:42.563995 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 03:42:42.564000 kernel: rcu: Hierarchical RCU implementation.
Dec 13 03:42:42.564006 kernel: rcu: RCU event tracing is enabled.
Dec 13 03:42:42.564012 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 03:42:42.564017 kernel: Rude variant of Tasks RCU enabled.
Dec 13 03:42:42.564022 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 03:42:42.564027 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:42:42.564033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 03:42:42.564038 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 03:42:42.564043 kernel: random: crng init done
Dec 13 03:42:42.564048 kernel: Console: colour dummy device 80x25
Dec 13 03:42:42.564053 kernel: printk: console [tty0] enabled
Dec 13 03:42:42.564059 kernel: printk: console [ttyS1] enabled
Dec 13 03:42:42.564065 kernel: ACPI: Core revision 20210730
Dec 13 03:42:42.564070 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Dec 13 03:42:42.564075 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:42:42.564080 kernel: DMAR: Host address width 39
Dec 13 03:42:42.564085 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Dec 13 03:42:42.564091 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Dec 13 03:42:42.564096 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 03:42:42.564101 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 03:42:42.564107 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Dec 13 03:42:42.564112 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Dec 13 03:42:42.564118 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Dec 13 03:42:42.564123 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 03:42:42.564128 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 03:42:42.564133 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 03:42:42.564138 kernel: x2apic enabled
Dec 13 03:42:42.564143 kernel: Switched APIC routing to cluster x2apic.
Dec 13 03:42:42.564149 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 03:42:42.564155 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 03:42:42.564160 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 03:42:42.564165 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 03:42:42.564171 kernel: process: using mwait in idle threads
Dec 13 03:42:42.564176 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 03:42:42.564181 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 03:42:42.564186 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:42:42.564192 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 03:42:42.564197 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 03:42:42.564203 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 03:42:42.564208 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 03:42:42.564213 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 03:42:42.564219 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 03:42:42.564224 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 03:42:42.564229 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 03:42:42.564234 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 03:42:42.564240 kernel: TAA: Mitigation: TSX disabled
Dec 13 03:42:42.564245 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 03:42:42.564251 kernel: SRBDS: Mitigation: Microcode
Dec 13 03:42:42.564256 kernel: GDS: Vulnerable: No microcode
Dec 13 03:42:42.564261 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 03:42:42.564266 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 03:42:42.564272 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 03:42:42.564277 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 03:42:42.564282 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 03:42:42.564287 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 03:42:42.564292 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 03:42:42.564298 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 03:42:42.564304 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 03:42:42.564309 kernel: Freeing SMP alternatives memory: 32K
Dec 13 03:42:42.564314 kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:42:42.564319 kernel: LSM: Security Framework initializing
Dec 13 03:42:42.564324 kernel: SELinux: Initializing.
Dec 13 03:42:42.564330 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:42:42.564335 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:42:42.564340 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 03:42:42.564346 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 03:42:42.564351 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 03:42:42.564357 kernel: ... version: 4
Dec 13 03:42:42.564362 kernel: ... bit width: 48
Dec 13 03:42:42.564367 kernel: ... generic registers: 4
Dec 13 03:42:42.564372 kernel: ... value mask: 0000ffffffffffff
Dec 13 03:42:42.564377 kernel: ... max period: 00007fffffffffff
Dec 13 03:42:42.564383 kernel: ... fixed-purpose events: 3
Dec 13 03:42:42.564388 kernel: ... event mask: 000000070000000f
Dec 13 03:42:42.564394 kernel: signal: max sigframe size: 2032
Dec 13 03:42:42.564399 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:42:42.564404 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 03:42:42.564409 kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:42:42.564414 kernel: x86: Booting SMP configuration:
Dec 13 03:42:42.564420 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 03:42:42.564425 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 03:42:42.564430 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 03:42:42.564436 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 03:42:42.564442 kernel: smpboot: Max logical packages: 1
Dec 13 03:42:42.564447 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 03:42:42.564452 kernel: devtmpfs: initialized
Dec 13 03:42:42.564457 kernel: x86/mm: Memory block size: 128MB
Dec 13 03:42:42.564462 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6e2d9000-0x6e2d9fff] (4096 bytes)
Dec 13 03:42:42.564468 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes)
Dec 13 03:42:42.564473 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:42:42.564478 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 03:42:42.564484 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:42:42.564489 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:42:42.564494 kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:42:42.564500 kernel: audit: type=2000 audit(1734061357.122:1): state=initialized audit_enabled=0 res=1
Dec 13 03:42:42.564505 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:42:42.564510 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:42:42.564515 kernel: cpuidle: using governor menu
Dec 13 03:42:42.564520 kernel: ACPI: bus type PCI registered
Dec 13 03:42:42.564525 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:42:42.564531 kernel: dca service started, version 1.12.1
Dec 13 03:42:42.564537 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 03:42:42.564542 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 03:42:42.564547 kernel: PCI: Using configuration type 1 for base access
Dec 13 03:42:42.564552 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 03:42:42.564557 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:42:42.564563 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 03:42:42.564568 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:42:42.564573 kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:42:42.564581 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:42:42.564586 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:42:42.564591 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:42:42.564596 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 03:42:42.564602 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 03:42:42.564607 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 03:42:42.564612 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 03:42:42.564617 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:42:42.564622 kernel: ACPI: SSDT 0xFFFF93854021B100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 03:42:42.564629 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 03:42:42.564634 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:42:42.564639 kernel: ACPI: SSDT 0xFFFF938541CEF000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 03:42:42.564644 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:42:42.564649 kernel: ACPI: SSDT 0xFFFF938541C5C800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 03:42:42.564655 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:42:42.564660 kernel: ACPI: SSDT 0xFFFF938541D4A800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 03:42:42.564665 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:42:42.564670 kernel: ACPI: SSDT 0xFFFF938540149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 03:42:42.564675 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:42:42.564681 kernel: ACPI: SSDT 0xFFFF938541CE9800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 03:42:42.564686 kernel: ACPI: Interpreter enabled
Dec 13 03:42:42.564691 kernel: ACPI: PM: (supports S0 S5)
Dec 13 03:42:42.564697 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:42:42.564702 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 03:42:42.564707 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 03:42:42.564712 kernel: HEST: Table parsing has been initialized.
Dec 13 03:42:42.564717 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 03:42:42.564723 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:42:42.564729 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 03:42:42.564734 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 03:42:42.564739 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 03:42:42.564744 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 03:42:42.564749 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 03:42:42.564754 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 03:42:42.564760 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Dec 13 03:42:42.564765 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 03:42:42.564770 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 03:42:42.564776 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 03:42:42.564781 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 03:42:42.564786 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 03:42:42.564791 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 03:42:42.564797 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 03:42:42.564863 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 03:42:42.564911 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 03:42:42.564954 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 03:42:42.564963 kernel: PCI host bridge to bus 0000:00
Dec 13 03:42:42.565008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 03:42:42.565047 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 03:42:42.565086 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:42:42.565124 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Dec 13 03:42:42.565162 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 03:42:42.565199 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 03:42:42.565253 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 03:42:42.565306 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 03:42:42.565352 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.565402 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Dec 13 03:42:42.565447 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.565497 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Dec 13 03:42:42.565543 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Dec 13 03:42:42.565590 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Dec 13 03:42:42.565634 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Dec 13 03:42:42.565684 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 03:42:42.565729 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Dec 13 03:42:42.565776 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 03:42:42.565822 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Dec 13 03:42:42.565870 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 03:42:42.565914 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Dec 13 03:42:42.565958 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 03:42:42.566008 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 03:42:42.566053 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Dec 13 03:42:42.566098 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Dec 13 03:42:42.566145 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 03:42:42.566188 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:42:42.566235 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 03:42:42.566278 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:42:42.566332 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 03:42:42.566378 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Dec 13 03:42:42.566421 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 03:42:42.566481 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 03:42:42.566522 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Dec 13 03:42:42.566564 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 03:42:42.566628 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 03:42:42.566686 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Dec 13 03:42:42.566728 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 03:42:42.566774 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 03:42:42.566816 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Dec 13 03:42:42.566857 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Dec 13 03:42:42.566897 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Dec 13 03:42:42.566938 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Dec 13 03:42:42.566978 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Dec 13 03:42:42.567021 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Dec 13 03:42:42.567061 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 03:42:42.567109 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 03:42:42.567151 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.567199 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 03:42:42.567241 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.567287 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 03:42:42.567328 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.567374 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 03:42:42.567416 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.567462 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Dec 13 03:42:42.567505 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.567549 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 03:42:42.567621 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:42:42.567688 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 03:42:42.567734 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 03:42:42.567777 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Dec 13 03:42:42.567818 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 03:42:42.567863 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 03:42:42.567904 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 03:42:42.567946 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 03:42:42.567993 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 03:42:42.568036 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 03:42:42.568082 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Dec 13 03:42:42.568125 kernel: pci 0000:02:00.0: PME# supported from D3cold
Dec 13 03:42:42.568168 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 03:42:42.568211 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 03:42:42.568258 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 03:42:42.568302 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 03:42:42.568344 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Dec 13 03:42:42.568388 kernel: pci 0000:02:00.1: PME# supported from D3cold
Dec 13 03:42:42.568431 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 03:42:42.568473 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 03:42:42.568515 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 13 03:42:42.568557 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Dec 13 03:42:42.568650 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:42:42.568713 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Dec 13 03:42:42.568761 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 03:42:42.568805 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 03:42:42.568848 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Dec 13 03:42:42.568889 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Dec 13 03:42:42.568932 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Dec 13 03:42:42.568975 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.569017 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Dec 13 03:42:42.569059 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 03:42:42.569102 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Dec 13 03:42:42.569152 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Dec 13 03:42:42.569196 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Dec 13 03:42:42.569239 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff]
Dec 13 03:42:42.569281 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Dec 13 03:42:42.569324 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff]
Dec 13 03:42:42.569366 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Dec 13 03:42:42.569410 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Dec 13 03:42:42.569452 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 03:42:42.569493 kernel: pci 0000:00:1b.5:
bridge window [mem 0x96300000-0x963fffff] Dec 13 03:42:42.569536 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Dec 13 03:42:42.569584 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Dec 13 03:42:42.569674 kernel: pci 0000:07:00.0: enabling Extended Tags Dec 13 03:42:42.569716 kernel: pci 0000:07:00.0: supports D1 D2 Dec 13 03:42:42.569761 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 03:42:42.569802 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Dec 13 03:42:42.569845 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Dec 13 03:42:42.569886 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Dec 13 03:42:42.569932 kernel: pci_bus 0000:08: extended config space not accessible Dec 13 03:42:42.569982 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Dec 13 03:42:42.570028 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Dec 13 03:42:42.570074 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Dec 13 03:42:42.570121 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Dec 13 03:42:42.570168 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 03:42:42.570212 kernel: pci 0000:08:00.0: supports D1 D2 Dec 13 03:42:42.570258 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 03:42:42.570301 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Dec 13 03:42:42.570344 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Dec 13 03:42:42.570387 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Dec 13 03:42:42.570396 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Dec 13 03:42:42.570402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Dec 13 03:42:42.570407 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Dec 13 03:42:42.570412 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Dec 13 03:42:42.570417 kernel: ACPI: PCI: Interrupt link LNKE 
configured for IRQ 0 Dec 13 03:42:42.570422 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Dec 13 03:42:42.570428 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Dec 13 03:42:42.570433 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Dec 13 03:42:42.570438 kernel: iommu: Default domain type: Translated Dec 13 03:42:42.570444 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 03:42:42.570489 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Dec 13 03:42:42.570534 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 03:42:42.570581 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Dec 13 03:42:42.570611 kernel: vgaarb: loaded Dec 13 03:42:42.570616 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 03:42:42.570622 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 03:42:42.570627 kernel: PTP clock support registered Dec 13 03:42:42.570633 kernel: PCI: Using ACPI for IRQ routing Dec 13 03:42:42.570659 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 03:42:42.570664 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Dec 13 03:42:42.570669 kernel: e820: reserve RAM buffer [mem 0x6e2d9000-0x6fffffff] Dec 13 03:42:42.570674 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Dec 13 03:42:42.570679 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Dec 13 03:42:42.570684 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Dec 13 03:42:42.570689 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Dec 13 03:42:42.570695 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 03:42:42.570700 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Dec 13 03:42:42.570706 kernel: clocksource: Switched to clocksource tsc-early Dec 13 03:42:42.570711 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 03:42:42.570716 kernel: VFS: Dquot-cache hash table entries: 512 
(order 0, 4096 bytes) Dec 13 03:42:42.570721 kernel: pnp: PnP ACPI init Dec 13 03:42:42.570765 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Dec 13 03:42:42.570809 kernel: pnp 00:02: [dma 0 disabled] Dec 13 03:42:42.570851 kernel: pnp 00:03: [dma 0 disabled] Dec 13 03:42:42.570893 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Dec 13 03:42:42.570931 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Dec 13 03:42:42.570972 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Dec 13 03:42:42.571013 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Dec 13 03:42:42.571051 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Dec 13 03:42:42.571089 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Dec 13 03:42:42.571127 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Dec 13 03:42:42.571165 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Dec 13 03:42:42.571202 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Dec 13 03:42:42.571239 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Dec 13 03:42:42.571276 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Dec 13 03:42:42.571316 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Dec 13 03:42:42.571354 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Dec 13 03:42:42.571393 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Dec 13 03:42:42.571430 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Dec 13 03:42:42.571466 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Dec 13 03:42:42.571504 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Dec 13 03:42:42.571541 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Dec 13 03:42:42.571584 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Dec 13 
03:42:42.571592 kernel: pnp: PnP ACPI: found 10 devices Dec 13 03:42:42.571622 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 03:42:42.571629 kernel: NET: Registered PF_INET protocol family Dec 13 03:42:42.571634 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 03:42:42.571640 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 03:42:42.571664 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 03:42:42.571669 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 03:42:42.571675 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 03:42:42.571680 kernel: TCP: Hash tables configured (established 262144 bind 65536) Dec 13 03:42:42.571685 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 03:42:42.571691 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 03:42:42.571696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 03:42:42.571702 kernel: NET: Registered PF_XDP protocol family Dec 13 03:42:42.571744 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Dec 13 03:42:42.571785 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Dec 13 03:42:42.571828 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Dec 13 03:42:42.571869 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 03:42:42.571916 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 03:42:42.571960 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 03:42:42.572004 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 03:42:42.572047 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 
03:42:42.572089 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 13 03:42:42.572133 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Dec 13 03:42:42.572177 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 03:42:42.572219 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Dec 13 03:42:42.572262 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Dec 13 03:42:42.572306 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 03:42:42.572348 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Dec 13 03:42:42.572390 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Dec 13 03:42:42.572432 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 03:42:42.572474 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Dec 13 03:42:42.572516 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Dec 13 03:42:42.572560 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Dec 13 03:42:42.572631 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Dec 13 03:42:42.572694 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Dec 13 03:42:42.572736 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Dec 13 03:42:42.572778 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Dec 13 03:42:42.572821 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Dec 13 03:42:42.572858 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 03:42:42.572895 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 03:42:42.572934 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 03:42:42.572970 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 03:42:42.573007 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Dec 13 03:42:42.573043 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Dec 13 03:42:42.573089 kernel: pci_bus 
0000:02: resource 1 [mem 0x96100000-0x962fffff] Dec 13 03:42:42.573128 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 03:42:42.573170 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Dec 13 03:42:42.573211 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Dec 13 03:42:42.573253 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 13 03:42:42.573292 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Dec 13 03:42:42.573333 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Dec 13 03:42:42.573372 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Dec 13 03:42:42.573413 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Dec 13 03:42:42.573456 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Dec 13 03:42:42.573463 kernel: PCI: CLS 64 bytes, default 64 Dec 13 03:42:42.573468 kernel: DMAR: No ATSR found Dec 13 03:42:42.573474 kernel: DMAR: No SATC found Dec 13 03:42:42.573479 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Dec 13 03:42:42.573484 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Dec 13 03:42:42.573489 kernel: DMAR: IOMMU feature nwfs inconsistent Dec 13 03:42:42.573495 kernel: DMAR: IOMMU feature pasid inconsistent Dec 13 03:42:42.573500 kernel: DMAR: IOMMU feature eafs inconsistent Dec 13 03:42:42.573505 kernel: DMAR: IOMMU feature prs inconsistent Dec 13 03:42:42.573511 kernel: DMAR: IOMMU feature nest inconsistent Dec 13 03:42:42.573517 kernel: DMAR: IOMMU feature mts inconsistent Dec 13 03:42:42.573522 kernel: DMAR: IOMMU feature sc_support inconsistent Dec 13 03:42:42.573527 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Dec 13 03:42:42.573532 kernel: DMAR: dmar0: Using Queued invalidation Dec 13 03:42:42.573537 kernel: DMAR: dmar1: Using Queued invalidation Dec 13 03:42:42.573581 kernel: pci 0000:00:00.0: Adding to iommu group 0 Dec 13 03:42:42.573666 kernel: pci 0000:00:01.0: Adding to iommu group 1 Dec 13 03:42:42.573711 
kernel: pci 0000:00:01.1: Adding to iommu group 1 Dec 13 03:42:42.573752 kernel: pci 0000:00:02.0: Adding to iommu group 2 Dec 13 03:42:42.573794 kernel: pci 0000:00:08.0: Adding to iommu group 3 Dec 13 03:42:42.573835 kernel: pci 0000:00:12.0: Adding to iommu group 4 Dec 13 03:42:42.573877 kernel: pci 0000:00:14.0: Adding to iommu group 5 Dec 13 03:42:42.573918 kernel: pci 0000:00:14.2: Adding to iommu group 5 Dec 13 03:42:42.573959 kernel: pci 0000:00:15.0: Adding to iommu group 6 Dec 13 03:42:42.574000 kernel: pci 0000:00:15.1: Adding to iommu group 6 Dec 13 03:42:42.574041 kernel: pci 0000:00:16.0: Adding to iommu group 7 Dec 13 03:42:42.574084 kernel: pci 0000:00:16.1: Adding to iommu group 7 Dec 13 03:42:42.574125 kernel: pci 0000:00:16.4: Adding to iommu group 7 Dec 13 03:42:42.574168 kernel: pci 0000:00:17.0: Adding to iommu group 8 Dec 13 03:42:42.574209 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Dec 13 03:42:42.574251 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Dec 13 03:42:42.574292 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Dec 13 03:42:42.574333 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Dec 13 03:42:42.574375 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Dec 13 03:42:42.574418 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Dec 13 03:42:42.574459 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Dec 13 03:42:42.574500 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Dec 13 03:42:42.574541 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Dec 13 03:42:42.574586 kernel: pci 0000:02:00.0: Adding to iommu group 1 Dec 13 03:42:42.574671 kernel: pci 0000:02:00.1: Adding to iommu group 1 Dec 13 03:42:42.574714 kernel: pci 0000:04:00.0: Adding to iommu group 16 Dec 13 03:42:42.574757 kernel: pci 0000:05:00.0: Adding to iommu group 17 Dec 13 03:42:42.574802 kernel: pci 0000:07:00.0: Adding to iommu group 18 Dec 13 03:42:42.574848 kernel: pci 0000:08:00.0: Adding to iommu group 18 Dec 13 03:42:42.574856 kernel: DMAR: 
Intel(R) Virtualization Technology for Directed I/O Dec 13 03:42:42.574861 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 03:42:42.574867 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Dec 13 03:42:42.574872 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Dec 13 03:42:42.574877 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Dec 13 03:42:42.574883 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Dec 13 03:42:42.574888 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Dec 13 03:42:42.574894 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Dec 13 03:42:42.574941 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Dec 13 03:42:42.574949 kernel: Initialise system trusted keyrings Dec 13 03:42:42.574954 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Dec 13 03:42:42.574960 kernel: Key type asymmetric registered Dec 13 03:42:42.574965 kernel: Asymmetric key parser 'x509' registered Dec 13 03:42:42.574970 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 03:42:42.574975 kernel: io scheduler mq-deadline registered Dec 13 03:42:42.574982 kernel: io scheduler kyber registered Dec 13 03:42:42.574987 kernel: io scheduler bfq registered Dec 13 03:42:42.575028 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Dec 13 03:42:42.575072 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Dec 13 03:42:42.575114 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Dec 13 03:42:42.575157 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Dec 13 03:42:42.575198 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Dec 13 03:42:42.575240 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Dec 13 03:42:42.575284 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Dec 13 03:42:42.575330 kernel: thermal LNXTHERM:00: registered as 
thermal_zone0 Dec 13 03:42:42.575338 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Dec 13 03:42:42.575343 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Dec 13 03:42:42.575349 kernel: pstore: Registered erst as persistent store backend Dec 13 03:42:42.575354 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 03:42:42.575359 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 03:42:42.575365 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 03:42:42.575371 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 03:42:42.575413 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Dec 13 03:42:42.575421 kernel: i8042: PNP: No PS/2 controller found. Dec 13 03:42:42.575459 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 03:42:42.575498 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 03:42:42.575536 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T03:42:41 UTC (1734061361) Dec 13 03:42:42.575574 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 03:42:42.575585 kernel: fail to initialize ptp_kvm Dec 13 03:42:42.575590 kernel: intel_pstate: Intel P-state driver initializing Dec 13 03:42:42.575621 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 03:42:42.575626 kernel: intel_pstate: HWP enabled Dec 13 03:42:42.575632 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 03:42:42.575637 kernel: vesafb: scrolling: redraw Dec 13 03:42:42.575658 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 03:42:42.575663 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x000000005dfe3a3b, using 768k, total 768k Dec 13 03:42:42.575669 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 03:42:42.575675 kernel: fb0: VESA VGA frame buffer device Dec 13 03:42:42.575680 kernel: NET: Registered PF_INET6 protocol family Dec 13 
03:42:42.575685 kernel: Segment Routing with IPv6 Dec 13 03:42:42.575691 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 03:42:42.575696 kernel: NET: Registered PF_PACKET protocol family Dec 13 03:42:42.575701 kernel: Key type dns_resolver registered Dec 13 03:42:42.575706 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 03:42:42.575711 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 03:42:42.575717 kernel: IPI shorthand broadcast: enabled Dec 13 03:42:42.575723 kernel: sched_clock: Marking stable (1850935405, 1360191773)->(4658205153, -1447077975) Dec 13 03:42:42.575728 kernel: registered taskstats version 1 Dec 13 03:42:42.575733 kernel: Loading compiled-in X.509 certificates Dec 13 03:42:42.575738 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 03:42:42.575744 kernel: Key type .fscrypt registered Dec 13 03:42:42.575749 kernel: Key type fscrypt-provisioning registered Dec 13 03:42:42.575754 kernel: pstore: Using crash dump compression: deflate Dec 13 03:42:42.575759 kernel: ima: Allocated hash algorithm: sha1 Dec 13 03:42:42.575764 kernel: ima: No architecture policies found Dec 13 03:42:42.575770 kernel: clk: Disabling unused clocks Dec 13 03:42:42.575776 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 03:42:42.575781 kernel: Write protecting the kernel read-only data: 28672k Dec 13 03:42:42.575786 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 03:42:42.575791 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 03:42:42.575797 kernel: Run /init as init process Dec 13 03:42:42.575802 kernel: with arguments: Dec 13 03:42:42.575807 kernel: /init Dec 13 03:42:42.575812 kernel: with environment: Dec 13 03:42:42.575818 kernel: HOME=/ Dec 13 03:42:42.575823 kernel: TERM=linux Dec 13 03:42:42.575829 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 03:42:42.575835 systemd[1]: systemd 252 running in system 
mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:42:42.575841 systemd[1]: Detected architecture x86-64. Dec 13 03:42:42.575847 systemd[1]: Running in initrd. Dec 13 03:42:42.575852 systemd[1]: No hostname configured, using default hostname. Dec 13 03:42:42.575857 systemd[1]: Hostname set to . Dec 13 03:42:42.575863 systemd[1]: Initializing machine ID from random generator. Dec 13 03:42:42.575869 systemd[1]: Queued start job for default target initrd.target. Dec 13 03:42:42.575875 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:42:42.575880 systemd[1]: Reached target cryptsetup.target. Dec 13 03:42:42.575885 systemd[1]: Reached target paths.target. Dec 13 03:42:42.575890 systemd[1]: Reached target slices.target. Dec 13 03:42:42.575895 systemd[1]: Reached target swap.target. Dec 13 03:42:42.575901 systemd[1]: Reached target timers.target. Dec 13 03:42:42.575907 systemd[1]: Listening on iscsid.socket. Dec 13 03:42:42.575913 systemd[1]: Listening on iscsiuio.socket. Dec 13 03:42:42.575918 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 03:42:42.575923 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 03:42:42.575929 systemd[1]: Listening on systemd-journald.socket. Dec 13 03:42:42.575934 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:42:42.575940 kernel: tsc: Refined TSC clocksource calibration: 3408.017 MHz Dec 13 03:42:42.575946 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe44c681, max_idle_ns: 440795269197 ns Dec 13 03:42:42.575951 kernel: clocksource: Switched to clocksource tsc Dec 13 03:42:42.575956 systemd[1]: Listening on systemd-udevd-control.socket. 
Dec 13 03:42:42.575962 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:42:42.575967 systemd[1]: Reached target sockets.target. Dec 13 03:42:42.575973 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:42:42.575978 systemd[1]: Finished network-cleanup.service. Dec 13 03:42:42.575984 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 03:42:42.575989 systemd[1]: Starting systemd-journald.service... Dec 13 03:42:42.575995 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:42:42.576003 systemd-journald[269]: Journal started Dec 13 03:42:42.576027 systemd-journald[269]: Runtime Journal (/run/log/journal/26577fc743704535b2f83f7bbc02f020) is 8.0M, max 639.3M, 631.3M free. Dec 13 03:42:42.576860 systemd-modules-load[271]: Inserted module 'overlay' Dec 13 03:42:42.582000 audit: BPF prog-id=6 op=LOAD Dec 13 03:42:42.601628 kernel: audit: type=1334 audit(1734061362.582:2): prog-id=6 op=LOAD Dec 13 03:42:42.601643 systemd[1]: Starting systemd-resolved.service... Dec 13 03:42:42.650639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 03:42:42.650668 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 03:42:42.683600 kernel: Bridge firewalling registered Dec 13 03:42:42.683633 systemd[1]: Started systemd-journald.service. Dec 13 03:42:42.698329 systemd-modules-load[271]: Inserted module 'br_netfilter' Dec 13 03:42:42.746327 kernel: audit: type=1130 audit(1734061362.705:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:42:42.700827 systemd-resolved[273]: Positive Trust Anchors: Dec 13 03:42:42.821761 kernel: SCSI subsystem initialized Dec 13 03:42:42.821773 kernel: audit: type=1130 audit(1734061362.758:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.821781 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 03:42:42.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.700833 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:42:42.921376 kernel: device-mapper: uevent: version 1.0.3 Dec 13 03:42:42.921387 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 03:42:42.921396 kernel: audit: type=1130 audit(1734061362.877:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:42:42.700852 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:42:43.017817 kernel: audit: type=1130 audit(1734061362.929:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.702402 systemd-resolved[273]: Defaulting to hostname 'linux'. Dec 13 03:42:43.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.706994 systemd[1]: Started systemd-resolved.service. Dec 13 03:42:43.133858 kernel: audit: type=1130 audit(1734061363.027:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:43.133875 kernel: audit: type=1130 audit(1734061363.080:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:42:43.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:42.759771 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:42:42.879343 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 03:42:42.921720 systemd-modules-load[271]: Inserted module 'dm_multipath' Dec 13 03:42:42.930025 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:42:43.028060 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 03:42:43.081017 systemd[1]: Reached target nss-lookup.target. Dec 13 03:42:43.143252 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 03:42:43.150212 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:42:43.163301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 03:42:43.164027 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:42:43.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:43.166115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 03:42:43.282821 kernel: audit: type=1130 audit(1734061363.162:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:43.282834 kernel: audit: type=1130 audit(1734061363.225:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:43.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:42:43.226050 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 03:42:43.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:43.292180 systemd[1]: Starting dracut-cmdline.service... Dec 13 03:42:43.314690 dracut-cmdline[295]: dracut-dracut-053 Dec 13 03:42:43.314690 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 03:42:43.314690 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 03:42:43.383668 kernel: Loading iSCSI transport class v2.0-870. Dec 13 03:42:43.383686 kernel: iscsi: registered transport (tcp) Dec 13 03:42:43.438779 kernel: iscsi: registered transport (qla4xxx) Dec 13 03:42:43.438797 kernel: QLogic iSCSI HBA Driver Dec 13 03:42:43.455162 systemd[1]: Finished dracut-cmdline.service. Dec 13 03:42:43.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:43.464306 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 03:42:43.519620 kernel: raid6: avx2x4 gen() 48823 MB/s Dec 13 03:42:43.554652 kernel: raid6: avx2x4 xor() 21862 MB/s Dec 13 03:42:43.589652 kernel: raid6: avx2x2 gen() 53761 MB/s Dec 13 03:42:43.624614 kernel: raid6: avx2x2 xor() 32147 MB/s Dec 13 03:42:43.659649 kernel: raid6: avx2x1 gen() 45213 MB/s Dec 13 03:42:43.694612 kernel: raid6: avx2x1 xor() 27930 MB/s Dec 13 03:42:43.727649 kernel: raid6: sse2x4 gen() 21372 MB/s Dec 13 03:42:43.761612 kernel: raid6: sse2x4 xor() 11985 MB/s Dec 13 03:42:43.795613 kernel: raid6: sse2x2 gen() 21647 MB/s Dec 13 03:42:43.829649 kernel: raid6: sse2x2 xor() 13370 MB/s Dec 13 03:42:43.863656 kernel: raid6: sse2x1 gen() 18294 MB/s Dec 13 03:42:43.915265 kernel: raid6: sse2x1 xor() 8934 MB/s Dec 13 03:42:43.915280 kernel: raid6: using algorithm avx2x2 gen() 53761 MB/s Dec 13 03:42:43.915288 kernel: raid6: .... xor() 32147 MB/s, rmw enabled Dec 13 03:42:43.933312 kernel: raid6: using avx2x2 recovery algorithm Dec 13 03:42:43.979647 kernel: xor: automatically using best checksumming function avx Dec 13 03:42:44.058588 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 03:42:44.063285 systemd[1]: Finished dracut-pre-udev.service. Dec 13 03:42:44.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:44.071000 audit: BPF prog-id=7 op=LOAD Dec 13 03:42:44.071000 audit: BPF prog-id=8 op=LOAD Dec 13 03:42:44.072460 systemd[1]: Starting systemd-udevd.service... Dec 13 03:42:44.080486 systemd-udevd[473]: Using default interface naming scheme 'v252'. Dec 13 03:42:44.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:44.085733 systemd[1]: Started systemd-udevd.service. 
Dec 13 03:42:44.126701 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Dec 13 03:42:44.103201 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 03:42:44.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:44.131626 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 03:42:44.144801 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:42:44.196899 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:42:44.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:44.223588 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 03:42:44.242658 kernel: ACPI: bus type USB registered Dec 13 03:42:44.242681 kernel: usbcore: registered new interface driver usbfs Dec 13 03:42:44.242689 kernel: usbcore: registered new interface driver hub Dec 13 03:42:44.242699 kernel: usbcore: registered new device driver usb Dec 13 03:42:44.302593 kernel: libata version 3.00 loaded. Dec 13 03:42:44.302642 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 03:42:44.319616 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Dec 13 03:42:44.922373 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 03:42:44.922449 kernel: AES CTR mode by8 optimization enabled Dec 13 03:42:44.922460 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 03:42:44.922468 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Dec 13 03:42:44.922477 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 03:42:44.922545 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 03:42:44.922611 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Dec 13 03:42:44.922671 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 03:42:44.922728 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 03:42:44.922784 kernel: pps pps0: new PPS source ptp0 Dec 13 03:42:44.922854 kernel: igb 0000:04:00.0: added PHC on eth0 Dec 13 03:42:44.922919 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 03:42:44.922980 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6a Dec 13 03:42:44.923040 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Dec 13 03:42:44.923099 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 03:42:44.923158 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 03:42:44.923215 kernel: pps pps1: new PPS source ptp1 Dec 13 03:42:44.923276 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 03:42:44.923333 kernel: scsi host0: ahci Dec 13 03:42:44.923403 kernel: scsi host1: ahci Dec 13 03:42:44.923467 kernel: scsi host2: ahci Dec 13 03:42:44.923529 kernel: scsi host3: ahci Dec 13 03:42:44.923591 kernel: scsi host4: ahci Dec 13 03:42:44.923652 kernel: scsi host5: ahci Dec 13 03:42:44.923715 kernel: scsi host6: ahci Dec 13 03:42:44.923778 kernel: scsi host7: ahci Dec 13 03:42:44.923839 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 134 Dec 13 03:42:44.923849 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 134 Dec 13 03:42:44.923857 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 134 Dec 13 03:42:44.923865 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 
134 Dec 13 03:42:44.923873 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 134 Dec 13 03:42:44.923882 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 134 Dec 13 03:42:44.923890 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 134 Dec 13 03:42:44.923899 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 134 Dec 13 03:42:44.923907 kernel: igb 0000:05:00.0: added PHC on eth1 Dec 13 03:42:44.923971 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 03:42:44.924029 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 03:42:44.924088 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 03:42:44.924145 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6b Dec 13 03:42:44.924204 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Dec 13 03:42:44.924263 kernel: hub 1-0:1.0: USB hub found Dec 13 03:42:44.924332 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Dec 13 03:42:44.924393 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 03:42:44.924451 kernel: hub 1-0:1.0: 16 ports detected Dec 13 03:42:44.924514 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 03:42:44.924573 kernel: hub 2-0:1.0: USB hub found Dec 13 03:42:44.924646 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 03:42:44.924655 kernel: hub 2-0:1.0: 10 ports detected Dec 13 03:42:44.924719 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 03:42:44.924728 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 03:42:44.924790 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 03:42:44.924799 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Dec 13 03:42:45.629656 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 03:42:45.629669 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 03:42:45.629742 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 03:42:45.629751 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 03:42:45.651957 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 03:42:45.651982 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 03:42:45.652001 kernel: ata8: SATA link down (SStatus 0 SControl 300) Dec 13 03:42:45.652020 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 03:42:45.652038 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 03:42:45.652057 kernel: hub 1-14:1.0: USB hub found Dec 13 03:42:45.652302 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 03:42:45.652331 kernel: hub 1-14:1.0: 4 ports detected Dec 13 03:42:45.652544 kernel: ata2.00: Features: NCQ-prio Dec 13 03:42:45.652565 kernel: mlx5_core 
0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 03:42:45.652767 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 03:42:45.652783 kernel: port_module: 9 callbacks suppressed Dec 13 03:42:45.652805 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Dec 13 03:42:45.653012 kernel: ata1.00: Features: NCQ-prio Dec 13 03:42:45.653035 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 03:42:45.653251 kernel: ata2.00: configured for UDMA/133 Dec 13 03:42:45.653267 kernel: ata1.00: configured for UDMA/133 Dec 13 03:42:45.653285 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 03:42:45.842228 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 03:42:45.842372 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Dec 13 03:42:45.842481 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:42:45.842489 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:42:45.842498 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 03:42:45.842558 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 03:42:45.842626 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Dec 13 03:42:45.842689 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Dec 13 03:42:45.842743 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 13 03:42:45.842815 kernel: sd 0:0:0:0: [sdb] Write Protect is off Dec 13 03:42:45.842889 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 13 03:42:45.842945 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 03:42:45.843050 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 03:42:45.843110 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 03:42:45.843168 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 03:42:45.843224 kernel: 
sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 03:42:45.843281 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 03:42:45.843340 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:42:45.843347 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 03:42:45.843355 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:42:45.843362 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:42:45.843368 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 13 03:42:45.843424 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 03:42:45.843431 kernel: GPT:9289727 != 937703087 Dec 13 03:42:45.843438 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 03:42:45.843444 kernel: GPT:9289727 != 937703087 Dec 13 03:42:45.843450 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 03:42:45.843457 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 03:42:45.843465 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:42:45.843471 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Dec 13 03:42:45.859585 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Dec 13 03:42:45.868708 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 03:42:45.963638 kernel: usbcore: registered new interface driver usbhid Dec 13 03:42:45.963652 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (667) Dec 13 03:42:45.963663 kernel: usbhid: USB HID core driver Dec 13 03:42:45.963670 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 03:42:45.963677 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Dec 13 03:42:45.940350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 03:42:45.975993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Dec 13 03:42:46.010656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 03:42:46.088601 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 03:42:46.088686 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 03:42:46.088695 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 03:42:46.057913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:42:46.102276 systemd[1]: Starting disk-uuid.service... Dec 13 03:42:46.140725 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:42:46.140761 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 03:42:46.140913 disk-uuid[692]: Primary Header is updated. Dec 13 03:42:46.140913 disk-uuid[692]: Secondary Entries is updated. Dec 13 03:42:46.140913 disk-uuid[692]: Secondary Header is updated. Dec 13 03:42:46.194629 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:42:46.194654 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 03:42:47.168775 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:42:47.186638 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 03:42:47.186740 disk-uuid[693]: The operation has completed successfully. Dec 13 03:42:47.220965 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 03:42:47.314303 kernel: audit: type=1130 audit(1734061367.227:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.314319 kernel: audit: type=1131 audit(1734061367.227:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:42:47.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.221008 systemd[1]: Finished disk-uuid.service. Dec 13 03:42:47.347775 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 03:42:47.233226 systemd[1]: Starting verity-setup.service... Dec 13 03:42:47.374964 systemd[1]: Found device dev-mapper-usr.device. Dec 13 03:42:47.384730 systemd[1]: Mounting sysusr-usr.mount... Dec 13 03:42:47.398981 systemd[1]: Finished verity-setup.service. Dec 13 03:42:47.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.460584 kernel: audit: type=1130 audit(1734061367.412:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.514074 systemd[1]: Mounted sysusr-usr.mount. Dec 13 03:42:47.528679 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 03:42:47.520953 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Dec 13 03:42:47.609117 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:42:47.609131 kernel: BTRFS info (device sdb6): using free space tree Dec 13 03:42:47.609138 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 03:42:47.609145 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 03:42:47.521349 systemd[1]: Starting ignition-setup.service... Dec 13 03:42:47.544027 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 03:42:47.680668 kernel: audit: type=1130 audit(1734061367.633:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.618052 systemd[1]: Finished ignition-setup.service. Dec 13 03:42:47.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.633941 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 03:42:47.769720 kernel: audit: type=1130 audit(1734061367.688:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.769740 kernel: audit: type=1334 audit(1734061367.745:24): prog-id=9 op=LOAD Dec 13 03:42:47.745000 audit: BPF prog-id=9 op=LOAD Dec 13 03:42:47.689283 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 03:42:47.748304 systemd[1]: Starting systemd-networkd.service... 
Dec 13 03:42:47.785795 systemd-networkd[874]: lo: Link UP Dec 13 03:42:47.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.785797 systemd-networkd[874]: lo: Gained carrier Dec 13 03:42:47.868808 kernel: audit: type=1130 audit(1734061367.799:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.786094 systemd-networkd[874]: Enumeration completed Dec 13 03:42:47.786141 systemd[1]: Started systemd-networkd.service. Dec 13 03:42:47.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.786733 systemd-networkd[874]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:42:47.960827 kernel: audit: type=1130 audit(1734061367.890:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.960840 iscsid[884]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:42:47.960840 iscsid[884]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 03:42:47.960840 iscsid[884]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 03:42:47.960840 iscsid[884]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 03:42:47.960840 iscsid[884]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 03:42:47.960840 iscsid[884]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:42:47.960840 iscsid[884]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 03:42:48.182838 kernel: audit: type=1130 audit(1734061367.966:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:48.182919 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Dec 13 03:42:48.183312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Dec 13 03:42:47.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:48.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:48.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.801045 systemd[1]: Reached target network.target. Dec 13 03:42:47.993565 ignition[869]: Ignition 2.14.0 Dec 13 03:42:47.861245 systemd[1]: Starting iscsiuio.service... Dec 13 03:42:47.993569 ignition[869]: Stage: fetch-offline Dec 13 03:42:47.876815 systemd[1]: Started iscsiuio.service. 
Dec 13 03:42:47.993596 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:42:48.275689 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Dec 13 03:42:48.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:47.891280 systemd[1]: Starting iscsid.service... Dec 13 03:42:47.993611 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:42:47.947801 systemd[1]: Started iscsid.service. Dec 13 03:42:48.004256 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:42:47.968300 systemd[1]: Starting dracut-initqueue.service... Dec 13 03:42:48.004319 ignition[869]: parsed url from cmdline: "" Dec 13 03:42:48.005353 unknown[869]: fetched base config from "system" Dec 13 03:42:48.004321 ignition[869]: no config URL provided Dec 13 03:42:48.005356 unknown[869]: fetched user config from "system" Dec 13 03:42:48.004324 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:42:48.042497 systemd-networkd[874]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:42:48.004336 ignition[869]: parsing config with SHA512: 1710598fb1094087fe82b2c685d814ba3f1c5ed2199f30e3969147ffc56dc3950ea7a35ff75856c0ccc967f190dad1f807709c79f6c471e995baab37c0736f7a Dec 13 03:42:48.054840 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 03:42:48.005537 ignition[869]: fetch-offline: fetch-offline passed Dec 13 03:42:48.118749 systemd[1]: Finished dracut-initqueue.service. Dec 13 03:42:48.005540 ignition[869]: POST message to Packet Timeline Dec 13 03:42:48.127126 systemd[1]: Reached target remote-fs-pre.target. 
Dec 13 03:42:48.005544 ignition[869]: POST Status error: resource requires networking Dec 13 03:42:48.144001 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:42:48.005584 ignition[869]: Ignition finished successfully Dec 13 03:42:48.170850 systemd[1]: Reached target remote-fs.target. Dec 13 03:42:48.224125 ignition[903]: Ignition 2.14.0 Dec 13 03:42:48.192823 systemd[1]: Starting dracut-pre-mount.service... Dec 13 03:42:48.224129 ignition[903]: Stage: kargs Dec 13 03:42:48.217784 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 03:42:48.224204 ignition[903]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:42:48.218225 systemd[1]: Starting ignition-kargs.service... Dec 13 03:42:48.224216 ignition[903]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:42:48.231933 systemd[1]: Finished dracut-pre-mount.service. Dec 13 03:42:48.226074 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:42:48.267071 systemd-networkd[874]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:42:48.227777 ignition[903]: kargs: kargs passed Dec 13 03:42:48.296287 systemd-networkd[874]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 03:42:48.227782 ignition[903]: POST message to Packet Timeline Dec 13 03:42:48.326119 systemd-networkd[874]: enp2s0f1np1: Link UP Dec 13 03:42:48.227797 ignition[903]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:42:48.326461 systemd-networkd[874]: enp2s0f1np1: Gained carrier Dec 13 03:42:48.230352 ignition[903]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59237->[::1]:53: read: connection refused Dec 13 03:42:48.342106 systemd-networkd[874]: enp2s0f0np0: Link UP Dec 13 03:42:48.431040 ignition[903]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 03:42:48.342488 systemd-networkd[874]: eno2: Link UP Dec 13 03:42:48.432427 ignition[903]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43523->[::1]:53: read: connection refused Dec 13 03:42:48.342870 systemd-networkd[874]: eno1: Link UP Dec 13 03:42:48.833631 ignition[903]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 03:42:48.834863 ignition[903]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50605->[::1]:53: read: connection refused Dec 13 03:42:49.079154 systemd-networkd[874]: enp2s0f0np0: Gained carrier Dec 13 03:42:49.088835 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Dec 13 03:42:49.162793 systemd-networkd[874]: enp2s0f0np0: DHCPv4 address 145.40.90.151/31, gateway 145.40.90.150 acquired from 145.40.83.140 Dec 13 03:42:49.376063 systemd-networkd[874]: enp2s0f1np1: Gained IPv6LL Dec 13 03:42:49.635268 ignition[903]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 03:42:49.636529 ignition[903]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49954->[::1]:53: read: connection refused Dec 13 03:42:51.040052 systemd-networkd[874]: 
enp2s0f0np0: Gained IPv6LL Dec 13 03:42:51.237840 ignition[903]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 03:42:51.239218 ignition[903]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53734->[::1]:53: read: connection refused Dec 13 03:42:54.442532 ignition[903]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 03:42:55.263081 ignition[903]: GET result: OK Dec 13 03:42:55.589769 ignition[903]: Ignition finished successfully Dec 13 03:42:55.592023 systemd[1]: Finished ignition-kargs.service. Dec 13 03:42:55.681332 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 03:42:55.681348 kernel: audit: type=1130 audit(1734061375.604:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:55.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:55.614083 ignition[916]: Ignition 2.14.0 Dec 13 03:42:55.607122 systemd[1]: Starting ignition-disks.service... 
Dec 13 03:42:55.614086 ignition[916]: Stage: disks Dec 13 03:42:55.614162 ignition[916]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:42:55.614171 ignition[916]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:42:55.616228 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:42:55.616686 ignition[916]: disks: disks passed Dec 13 03:42:55.616689 ignition[916]: POST message to Packet Timeline Dec 13 03:42:55.616699 ignition[916]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:42:56.189278 ignition[916]: GET result: OK Dec 13 03:42:56.590272 ignition[916]: Ignition finished successfully Dec 13 03:42:56.591753 systemd[1]: Finished ignition-disks.service. Dec 13 03:42:56.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:56.606180 systemd[1]: Reached target initrd-root-device.target. Dec 13 03:42:56.685856 kernel: audit: type=1130 audit(1734061376.605:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:56.670839 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:42:56.670871 systemd[1]: Reached target local-fs.target. Dec 13 03:42:56.694849 systemd[1]: Reached target sysinit.target. Dec 13 03:42:56.714810 systemd[1]: Reached target basic.target. Dec 13 03:42:56.728523 systemd[1]: Starting systemd-fsck-root.service... Dec 13 03:42:56.752153 systemd-fsck[933]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 03:42:56.764189 systemd[1]: Finished systemd-fsck-root.service. 
Dec 13 03:42:56.856487 kernel: audit: type=1130 audit(1734061376.771:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:56.856503 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 03:42:56.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:42:56.774616 systemd[1]: Mounting sysroot.mount... Dec 13 03:42:56.864266 systemd[1]: Mounted sysroot.mount. Dec 13 03:42:56.878827 systemd[1]: Reached target initrd-root-fs.target. Dec 13 03:42:56.885567 systemd[1]: Mounting sysroot-usr.mount... Dec 13 03:42:56.910436 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 03:42:56.919127 systemd[1]: Starting flatcar-static-network.service... Dec 13 03:42:56.926748 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 03:42:56.926775 systemd[1]: Reached target ignition-diskful.target. Dec 13 03:42:56.951687 systemd[1]: Mounted sysroot-usr.mount. Dec 13 03:42:56.975054 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:42:57.117816 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (945) Dec 13 03:42:57.117837 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:42:57.117845 kernel: BTRFS info (device sdb6): using free space tree Dec 13 03:42:57.117853 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 03:42:57.117860 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 03:42:56.988454 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 03:42:57.179632 kernel: audit: type=1130 audit(1734061377.126:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.179673 coreos-metadata[941]: Dec 13 03:42:57.096 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 03:42:57.201844 coreos-metadata[942]: Dec 13 03:42:57.096 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 03:42:57.221631 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 03:42:57.086009 systemd[1]: Finished initrd-setup-root.service.
Dec 13 03:42:57.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.275854 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Dec 13 03:42:57.314818 kernel: audit: type=1130 audit(1734061377.248:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.127903 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 03:42:57.323852 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 03:42:57.188237 systemd[1]: Starting ignition-mount.service...
Dec 13 03:42:57.340832 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 03:42:57.209174 systemd[1]: Starting sysroot-boot.service...
Dec 13 03:42:57.357756 ignition[1018]: INFO : Ignition 2.14.0
Dec 13 03:42:57.357756 ignition[1018]: INFO : Stage: mount
Dec 13 03:42:57.357756 ignition[1018]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:42:57.357756 ignition[1018]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 03:42:57.357756 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 03:42:57.357756 ignition[1018]: INFO : mount: mount passed
Dec 13 03:42:57.357756 ignition[1018]: INFO : POST message to Packet Timeline
Dec 13 03:42:57.357756 ignition[1018]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 03:42:57.230643 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 03:42:57.449832 coreos-metadata[941]: Dec 13 03:42:57.419 INFO Fetch successful
Dec 13 03:42:57.230696 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 03:42:57.231480 systemd[1]: Finished sysroot-boot.service.
Dec 13 03:42:57.493602 coreos-metadata[941]: Dec 13 03:42:57.493 INFO wrote hostname ci-3510.3.6-a-4c4d6acc59 to /sysroot/etc/hostname
Dec 13 03:42:57.494043 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 03:42:57.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.578636 kernel: audit: type=1130 audit(1734061377.522:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.671208 coreos-metadata[942]: Dec 13 03:42:57.671 INFO Fetch successful
Dec 13 03:42:57.696561 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 03:42:57.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.734985 ignition[1018]: INFO : GET result: OK
Dec 13 03:42:57.835807 kernel: audit: type=1130 audit(1734061377.704:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.835821 kernel: audit: type=1131 audit(1734061377.704:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:57.696632 systemd[1]: Finished flatcar-static-network.service.
Dec 13 03:42:58.156809 ignition[1018]: INFO : Ignition finished successfully
Dec 13 03:42:58.157766 systemd[1]: Finished ignition-mount.service.
Dec 13 03:42:58.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:58.174895 systemd[1]: Starting ignition-files.service...
Dec 13 03:42:58.243775 kernel: audit: type=1130 audit(1734061378.173:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:42:58.238389 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 03:42:58.299874 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1034)
Dec 13 03:42:58.299889 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 03:42:58.299897 kernel: BTRFS info (device sdb6): using free space tree
Dec 13 03:42:58.322952 kernel: BTRFS info (device sdb6): has skinny extents
Dec 13 03:42:58.371620 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Dec 13 03:42:58.373334 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 03:42:58.391740 ignition[1053]: INFO : Ignition 2.14.0
Dec 13 03:42:58.391740 ignition[1053]: INFO : Stage: files
Dec 13 03:42:58.391740 ignition[1053]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:42:58.391740 ignition[1053]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 03:42:58.391740 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 03:42:58.391740 ignition[1053]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 03:42:58.391740 ignition[1053]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 03:42:58.391740 ignition[1053]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 03:42:58.507759 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1061)
Dec 13 03:42:58.394252 unknown[1053]: wrote ssh authorized keys file for user: core
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2208630142"
Dec 13 03:42:58.516807 ignition[1053]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2208630142": device or resource busy
Dec 13 03:42:58.516807 ignition[1053]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2208630142", trying btrfs: device or resource busy
Dec 13 03:42:58.516807 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2208630142"
Dec 13 03:42:58.769918 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2208630142"
Dec 13 03:42:58.769918 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem2208630142"
Dec 13 03:42:58.769918 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem2208630142"
Dec 13 03:42:58.769918 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Dec 13 03:42:58.769918 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:42:58.769918 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 03:42:58.976991 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 03:42:59.195793 ignition[1053]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 03:42:59.195793 ignition[1053]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 03:42:59.195793 ignition[1053]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 03:42:59.195793 ignition[1053]: INFO : files: op(c): [started] processing unit "packet-phone-home.service"
Dec 13 03:42:59.195793 ignition[1053]: INFO : files: op(c): [finished] processing unit "packet-phone-home.service"
Dec 13 03:42:59.195793 ignition[1053]: INFO : files: op(d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 03:42:59.276888 ignition[1053]: INFO : files: op(d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 03:42:59.276888 ignition[1053]: INFO : files: op(e): [started] setting preset to enabled for "packet-phone-home.service"
Dec 13 03:42:59.276888 ignition[1053]: INFO : files: op(e): [finished] setting preset to enabled for "packet-phone-home.service"
Dec 13 03:42:59.276888 ignition[1053]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 03:42:59.276888 ignition[1053]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 03:42:59.276888 ignition[1053]: INFO : files: files passed
Dec 13 03:42:59.276888 ignition[1053]: INFO : POST message to Packet Timeline
Dec 13 03:42:59.276888 ignition[1053]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 03:42:59.978304 ignition[1053]: INFO : GET result: OK
Dec 13 03:43:00.262789 ignition[1053]: INFO : Ignition finished successfully
Dec 13 03:43:00.265617 systemd[1]: Finished ignition-files.service.
Dec 13 03:43:00.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.285163 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 03:43:00.355859 kernel: audit: type=1130 audit(1734061380.277:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.345852 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 03:43:00.379780 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 03:43:00.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.346232 systemd[1]: Starting ignition-quench.service...
Dec 13 03:43:00.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.363041 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 03:43:00.389947 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 03:43:00.390000 systemd[1]: Finished ignition-quench.service.
Dec 13 03:43:00.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.411931 systemd[1]: Reached target ignition-complete.target.
Dec 13 03:43:00.428324 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 03:43:00.439890 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 03:43:00.439935 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 03:43:00.456924 systemd[1]: Reached target initrd-fs.target.
Dec 13 03:43:00.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.471743 systemd[1]: Reached target initrd.target.
Dec 13 03:43:00.485903 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 03:43:00.486880 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 03:43:00.514556 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 03:43:00.526935 systemd[1]: Starting initrd-cleanup.service...
Dec 13 03:43:00.554631 systemd[1]: Stopped target nss-lookup.target.
Dec 13 03:43:00.708168 kernel: kauditd_printk_skb: 6 callbacks suppressed
Dec 13 03:43:00.708184 kernel: audit: type=1131 audit(1734061380.609:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.565088 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 03:43:00.581255 systemd[1]: Stopped target timers.target.
Dec 13 03:43:00.595176 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 03:43:00.595550 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 03:43:00.611476 systemd[1]: Stopped target initrd.target.
Dec 13 03:43:00.715836 systemd[1]: Stopped target basic.target.
Dec 13 03:43:00.732884 systemd[1]: Stopped target ignition-complete.target.
Dec 13 03:43:00.756884 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 03:43:00.772890 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 03:43:00.787925 systemd[1]: Stopped target remote-fs.target.
Dec 13 03:43:00.804232 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 03:43:00.822325 systemd[1]: Stopped target sysinit.target.
Dec 13 03:43:00.837134 systemd[1]: Stopped target local-fs.target.
Dec 13 03:43:00.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.853187 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 03:43:00.984808 kernel: audit: type=1131 audit(1734061380.900:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.870180 systemd[1]: Stopped target swap.target.
Dec 13 03:43:01.050690 kernel: audit: type=1131 audit(1734061380.992:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.886059 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 03:43:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.886433 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 03:43:01.131812 kernel: audit: type=1131 audit(1734061381.058:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:00.902414 systemd[1]: Stopped target cryptsetup.target.
Dec 13 03:43:00.976865 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 03:43:00.976953 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 03:43:00.992940 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 03:43:00.993020 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 03:43:01.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.059040 systemd[1]: Stopped target paths.target.
Dec 13 03:43:01.324567 kernel: audit: type=1131 audit(1734061381.195:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.324588 kernel: audit: type=1131 audit(1734061381.265:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.124812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 03:43:01.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.130807 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 03:43:01.411830 kernel: audit: type=1131 audit(1734061381.332:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.411846 ignition[1099]: INFO : Ignition 2.14.0
Dec 13 03:43:01.411846 ignition[1099]: INFO : Stage: umount
Dec 13 03:43:01.411846 ignition[1099]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 03:43:01.411846 ignition[1099]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 03:43:01.411846 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 03:43:01.411846 ignition[1099]: INFO : umount: umount passed
Dec 13 03:43:01.411846 ignition[1099]: INFO : POST message to Packet Timeline
Dec 13 03:43:01.411846 ignition[1099]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 03:43:01.692825 kernel: audit: type=1131 audit(1734061381.439:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.692842 kernel: audit: type=1131 audit(1734061381.546:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.692850 kernel: audit: type=1131 audit(1734061381.615:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.131909 systemd[1]: Stopped target slices.target.
Dec 13 03:43:01.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.706845 iscsid[884]: iscsid shutting down.
Dec 13 03:43:01.152883 systemd[1]: Stopped target sockets.target.
Dec 13 03:43:01.175905 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 03:43:01.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.176016 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 03:43:01.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:01.195970 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 03:43:01.196108 systemd[1]: Stopped ignition-files.service.
Dec 13 03:43:01.265941 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 03:43:01.266015 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 03:43:01.333531 systemd[1]: Stopping ignition-mount.service...
Dec 13 03:43:01.401971 systemd[1]: Stopping iscsid.service...
Dec 13 03:43:01.419814 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 03:43:01.419885 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 03:43:01.440473 systemd[1]: Stopping sysroot-boot.service...
Dec 13 03:43:01.508799 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 03:43:01.508873 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 03:43:01.546903 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 03:43:01.546990 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 03:43:01.617325 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 03:43:01.617703 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 03:43:01.617747 systemd[1]: Stopped iscsid.service.
Dec 13 03:43:01.683034 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 03:43:01.683072 systemd[1]: Stopped sysroot-boot.service.
Dec 13 03:43:01.700012 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 03:43:01.700056 systemd[1]: Closed iscsid.socket.
Dec 13 03:43:01.714100 systemd[1]: Stopping iscsiuio.service...
Dec 13 03:43:01.729337 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 03:43:01.729599 systemd[1]: Stopped iscsiuio.service.
Dec 13 03:43:01.744541 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 03:43:01.744794 systemd[1]: Finished initrd-cleanup.service.
Dec 13 03:43:01.763798 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 03:43:01.763883 systemd[1]: Closed iscsiuio.socket.
Dec 13 03:43:02.043832 ignition[1099]: INFO : GET result: OK
Dec 13 03:43:02.487684 ignition[1099]: INFO : Ignition finished successfully
Dec 13 03:43:02.490297 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 03:43:02.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.490532 systemd[1]: Stopped ignition-mount.service.
Dec 13 03:43:02.505187 systemd[1]: Stopped target network.target.
Dec 13 03:43:02.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.521819 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 03:43:02.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.521979 systemd[1]: Stopped ignition-disks.service.
Dec 13 03:43:02.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.536953 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 03:43:02.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.537097 systemd[1]: Stopped ignition-kargs.service.
Dec 13 03:43:02.552000 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 03:43:02.552152 systemd[1]: Stopped ignition-setup.service.
Dec 13 03:43:02.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.569997 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 03:43:02.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.648000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 03:43:02.570143 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 03:43:02.585263 systemd[1]: Stopping systemd-networkd.service...
Dec 13 03:43:02.590717 systemd-networkd[874]: enp2s0f0np0: DHCPv6 lease lost
Dec 13 03:43:02.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.598791 systemd-networkd[874]: enp2s0f1np1: DHCPv6 lease lost
Dec 13 03:43:02.703000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 03:43:02.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.600004 systemd[1]: Stopping systemd-resolved.service...
Dec 13 03:43:02.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.615387 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 03:43:02.615643 systemd[1]: Stopped systemd-resolved.service.
Dec 13 03:43:02.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.633505 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 03:43:02.633763 systemd[1]: Stopped systemd-networkd.service.
Dec 13 03:43:02.648230 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 03:43:02.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.648321 systemd[1]: Closed systemd-networkd.socket.
Dec 13 03:43:02.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.667125 systemd[1]: Stopping network-cleanup.service...
Dec 13 03:43:02.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.673806 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 03:43:02.673837 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 03:43:02.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.695886 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 03:43:02.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:43:02.695954 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 03:43:02.712154 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 03:43:02.712252 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 03:43:02.729318 systemd[1]: Stopping systemd-udevd.service... Dec 13 03:43:02.747695 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 03:43:02.748795 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 03:43:02.748854 systemd[1]: Stopped systemd-udevd.service. Dec 13 03:43:02.753938 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 03:43:02.753964 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 03:43:02.773785 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 03:43:02.773812 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 03:43:02.789765 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 03:43:02.789821 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 03:43:02.805954 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 03:43:03.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:02.806052 systemd[1]: Stopped dracut-cmdline.service. Dec 13 03:43:02.821678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 03:43:02.821703 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 03:43:02.838079 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 03:43:02.853673 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 03:43:02.853732 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 03:43:02.872441 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 03:43:02.872625 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 03:43:03.008649 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 03:43:03.008905 systemd[1]: Stopped network-cleanup.service. Dec 13 03:43:03.019127 systemd[1]: Reached target initrd-switch-root.target. 
Dec 13 03:43:03.036544 systemd[1]: Starting initrd-switch-root.service... Dec 13 03:43:03.057070 systemd[1]: Switching root. Dec 13 03:43:03.098984 systemd-journald[269]: Journal stopped Dec 13 03:43:07.097330 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Dec 13 03:43:07.097345 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 03:43:07.097352 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 03:43:07.097358 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 03:43:07.097362 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 03:43:07.097368 kernel: SELinux: policy capability open_perms=1 Dec 13 03:43:07.097373 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 03:43:07.097380 kernel: SELinux: policy capability always_check_network=0 Dec 13 03:43:07.097385 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 03:43:07.097390 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 03:43:07.097396 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 03:43:07.097401 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 03:43:07.097406 systemd[1]: Successfully loaded SELinux policy in 315.838ms. Dec 13 03:43:07.097413 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.240ms. Dec 13 03:43:07.097421 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:43:07.097427 systemd[1]: Detected architecture x86-64. Dec 13 03:43:07.097433 systemd[1]: Detected first boot. Dec 13 03:43:07.097439 systemd[1]: Hostname set to . Dec 13 03:43:07.097445 systemd[1]: Initializing machine ID from random generator. 
Dec 13 03:43:07.097452 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 03:43:07.097458 systemd[1]: Populated /etc with preset unit settings. Dec 13 03:43:07.097464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:43:07.097470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:43:07.097477 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:43:07.097483 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 03:43:07.097489 systemd[1]: Stopped initrd-switch-root.service. Dec 13 03:43:07.097495 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 03:43:07.097502 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 03:43:07.097508 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 03:43:07.097514 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 03:43:07.097520 systemd[1]: Created slice system-getty.slice. Dec 13 03:43:07.097526 systemd[1]: Created slice system-modprobe.slice. Dec 13 03:43:07.097532 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 03:43:07.097539 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 03:43:07.097545 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 03:43:07.097551 systemd[1]: Created slice user.slice. Dec 13 03:43:07.097558 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:43:07.097564 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 03:43:07.097570 systemd[1]: Set up automount boot.automount. 
Dec 13 03:43:07.097576 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 03:43:07.097586 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 03:43:07.097592 systemd[1]: Stopped target initrd-fs.target. Dec 13 03:43:07.097598 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 03:43:07.097606 systemd[1]: Reached target integritysetup.target. Dec 13 03:43:07.097612 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:43:07.097618 systemd[1]: Reached target remote-fs.target. Dec 13 03:43:07.097624 systemd[1]: Reached target slices.target. Dec 13 03:43:07.097630 systemd[1]: Reached target swap.target. Dec 13 03:43:07.097636 systemd[1]: Reached target torcx.target. Dec 13 03:43:07.097643 systemd[1]: Reached target veritysetup.target. Dec 13 03:43:07.097650 systemd[1]: Listening on systemd-coredump.socket. Dec 13 03:43:07.097656 systemd[1]: Listening on systemd-initctl.socket. Dec 13 03:43:07.097662 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:43:07.097669 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:43:07.097675 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:43:07.097682 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 03:43:07.097689 systemd[1]: Mounting dev-hugepages.mount... Dec 13 03:43:07.097695 systemd[1]: Mounting dev-mqueue.mount... Dec 13 03:43:07.097701 systemd[1]: Mounting media.mount... Dec 13 03:43:07.097708 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:43:07.097714 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 03:43:07.097721 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 03:43:07.097727 systemd[1]: Mounting tmp.mount... Dec 13 03:43:07.097733 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 03:43:07.097741 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 03:43:07.097747 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:43:07.097753 systemd[1]: Starting modprobe@configfs.service... Dec 13 03:43:07.097760 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:43:07.097766 systemd[1]: Starting modprobe@drm.service... Dec 13 03:43:07.097772 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:43:07.097779 systemd[1]: Starting modprobe@fuse.service... Dec 13 03:43:07.097785 systemd[1]: Starting modprobe@loop.service... Dec 13 03:43:07.097791 kernel: fuse: init (API version 7.34) Dec 13 03:43:07.097798 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 03:43:07.097805 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 03:43:07.097811 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 03:43:07.097817 kernel: loop: module loaded Dec 13 03:43:07.097823 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 03:43:07.097829 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 13 03:43:07.097835 kernel: audit: type=1131 audit(1734061386.724:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.097842 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 03:43:07.097849 kernel: audit: type=1131 audit(1734061386.813:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.097855 systemd[1]: Stopped systemd-journald.service. Dec 13 03:43:07.097862 kernel: audit: type=1130 audit(1734061386.876:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 03:43:07.097868 kernel: audit: type=1131 audit(1734061386.876:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.097873 kernel: audit: type=1334 audit(1734061386.960:118): prog-id=21 op=LOAD Dec 13 03:43:07.097879 kernel: audit: type=1334 audit(1734061386.978:119): prog-id=22 op=LOAD Dec 13 03:43:07.097885 kernel: audit: type=1334 audit(1734061386.996:120): prog-id=23 op=LOAD Dec 13 03:43:07.097891 kernel: audit: type=1334 audit(1734061387.014:121): prog-id=19 op=UNLOAD Dec 13 03:43:07.097897 systemd[1]: Starting systemd-journald.service... Dec 13 03:43:07.097903 kernel: audit: type=1334 audit(1734061387.014:122): prog-id=20 op=UNLOAD Dec 13 03:43:07.097909 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:43:07.097915 kernel: audit: type=1305 audit(1734061387.093:123): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 03:43:07.097923 systemd-journald[1250]: Journal started Dec 13 03:43:07.097948 systemd-journald[1250]: Runtime Journal (/run/log/journal/b0f13c4ba5174a13ba8bf3d332f85476) is 8.0M, max 639.3M, 631.3M free. 
Dec 13 03:43:03.491000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 03:43:03.769000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:43:03.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:43:03.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:43:03.772000 audit: BPF prog-id=10 op=LOAD Dec 13 03:43:03.772000 audit: BPF prog-id=10 op=UNLOAD Dec 13 03:43:03.772000 audit: BPF prog-id=11 op=LOAD Dec 13 03:43:03.772000 audit: BPF prog-id=11 op=UNLOAD Dec 13 03:43:03.873000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 03:43:03.873000 audit[1141]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:43:03.873000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:43:03.900000 audit[1141]: AVC 
avc: denied { associate } for pid=1141 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 03:43:03.900000 audit[1141]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b9 a2=1ed a3=0 items=2 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:43:03.900000 audit: CWD cwd="/" Dec 13 03:43:03.900000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:03.900000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:03.900000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:43:05.468000 audit: BPF prog-id=12 op=LOAD Dec 13 03:43:05.468000 audit: BPF prog-id=3 op=UNLOAD Dec 13 03:43:05.468000 audit: BPF prog-id=13 op=LOAD Dec 13 03:43:05.469000 audit: BPF prog-id=14 op=LOAD Dec 13 03:43:05.469000 audit: BPF prog-id=4 op=UNLOAD Dec 13 03:43:05.469000 audit: BPF prog-id=5 op=UNLOAD Dec 13 03:43:05.469000 audit: BPF prog-id=15 op=LOAD Dec 13 03:43:05.469000 audit: BPF prog-id=12 op=UNLOAD Dec 13 03:43:05.469000 audit: BPF prog-id=16 op=LOAD Dec 13 03:43:05.470000 audit: BPF prog-id=17 op=LOAD Dec 13 03:43:05.470000 audit: BPF prog-id=13 op=UNLOAD Dec 13 03:43:05.470000 audit: BPF prog-id=14 op=UNLOAD Dec 13 
03:43:05.470000 audit: BPF prog-id=18 op=LOAD Dec 13 03:43:05.470000 audit: BPF prog-id=15 op=UNLOAD Dec 13 03:43:05.470000 audit: BPF prog-id=19 op=LOAD Dec 13 03:43:05.470000 audit: BPF prog-id=20 op=LOAD Dec 13 03:43:05.470000 audit: BPF prog-id=16 op=UNLOAD Dec 13 03:43:05.470000 audit: BPF prog-id=17 op=UNLOAD Dec 13 03:43:05.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:05.522000 audit: BPF prog-id=18 op=UNLOAD Dec 13 03:43:05.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:05.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:06.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:06.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:06.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:43:06.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:06.960000 audit: BPF prog-id=21 op=LOAD Dec 13 03:43:06.978000 audit: BPF prog-id=22 op=LOAD Dec 13 03:43:06.996000 audit: BPF prog-id=23 op=LOAD Dec 13 03:43:07.014000 audit: BPF prog-id=19 op=UNLOAD Dec 13 03:43:07.014000 audit: BPF prog-id=20 op=UNLOAD Dec 13 03:43:07.093000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 03:43:03.872803 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:43:05.468756 systemd[1]: Queued start job for default target multi-user.target. Dec 13 03:43:03.873300 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:43:05.472715 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 03:43:03.873312 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:43:03.873330 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 03:43:03.873336 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 03:43:03.873352 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 03:43:03.873359 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 03:43:03.873466 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 03:43:03.873488 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:43:03.873496 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:43:03.874247 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 03:43:03.874268 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 03:43:03.874278 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 03:43:03.874286 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 03:43:03.874295 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 03:43:03.874302 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 03:43:05.094582 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:43:05.094721 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:43:05.094776 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Dec 13 03:43:05.094866 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:43:05.094895 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 03:43:05.094929 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-12-13T03:43:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 03:43:07.093000 audit[1250]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd2f7cd830 a2=4000 a3=7ffd2f7cd8cc items=0 ppid=1 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:43:07.093000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 03:43:07.135633 systemd[1]: Starting systemd-network-generator.service... Dec 13 03:43:07.180617 systemd[1]: Starting systemd-remount-fs.service... Dec 13 03:43:07.206638 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:43:07.250108 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 03:43:07.250133 systemd[1]: Stopped verity-setup.service. 
Dec 13 03:43:07.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.294626 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:43:07.313627 systemd[1]: Started systemd-journald.service. Dec 13 03:43:07.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.322140 systemd[1]: Mounted dev-hugepages.mount. Dec 13 03:43:07.329860 systemd[1]: Mounted dev-mqueue.mount. Dec 13 03:43:07.336857 systemd[1]: Mounted media.mount. Dec 13 03:43:07.343860 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 03:43:07.352845 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 03:43:07.361819 systemd[1]: Mounted tmp.mount. Dec 13 03:43:07.368902 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 03:43:07.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.376929 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:43:07.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.384965 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 03:43:07.385083 systemd[1]: Finished modprobe@configfs.service. 
Dec 13 03:43:07.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.394032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:43:07.394171 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:43:07.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.403156 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:43:07.403352 systemd[1]: Finished modprobe@drm.service. Dec 13 03:43:07.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.412408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:43:07.412745 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 03:43:07.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.421440 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 03:43:07.421771 systemd[1]: Finished modprobe@fuse.service. Dec 13 03:43:07.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.430412 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:43:07.430746 systemd[1]: Finished modprobe@loop.service. Dec 13 03:43:07.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.439435 systemd[1]: Finished systemd-modules-load.service. 
Dec 13 03:43:07.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.448394 systemd[1]: Finished systemd-network-generator.service. Dec 13 03:43:07.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.457416 systemd[1]: Finished systemd-remount-fs.service. Dec 13 03:43:07.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.466392 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:43:07.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.476017 systemd[1]: Reached target network-pre.target. Dec 13 03:43:07.487407 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 03:43:07.496301 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 03:43:07.503792 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 03:43:07.504813 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 03:43:07.512235 systemd[1]: Starting systemd-journal-flush.service... Dec 13 03:43:07.516560 systemd-journald[1250]: Time spent on flushing to /var/log/journal/b0f13c4ba5174a13ba8bf3d332f85476 is 14.705ms for 1610 entries. 
Dec 13 03:43:07.516560 systemd-journald[1250]: System Journal (/var/log/journal/b0f13c4ba5174a13ba8bf3d332f85476) is 8.0M, max 195.6M, 187.6M free. Dec 13 03:43:07.562505 systemd-journald[1250]: Received client request to flush runtime journal. Dec 13 03:43:07.528712 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:43:07.529178 systemd[1]: Starting systemd-random-seed.service... Dec 13 03:43:07.545697 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:43:07.546200 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:43:07.553173 systemd[1]: Starting systemd-sysusers.service... Dec 13 03:43:07.560176 systemd[1]: Starting systemd-udev-settle.service... Dec 13 03:43:07.567856 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 03:43:07.575761 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 03:43:07.583815 systemd[1]: Finished systemd-journal-flush.service. Dec 13 03:43:07.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.591839 systemd[1]: Finished systemd-random-seed.service. Dec 13 03:43:07.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.599811 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:43:07.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.607792 systemd[1]: Finished systemd-sysusers.service. 
Dec 13 03:43:07.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.616761 systemd[1]: Reached target first-boot-complete.target. Dec 13 03:43:07.624921 udevadm[1266]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 03:43:07.810719 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 03:43:07.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.820000 audit: BPF prog-id=24 op=LOAD Dec 13 03:43:07.820000 audit: BPF prog-id=25 op=LOAD Dec 13 03:43:07.820000 audit: BPF prog-id=7 op=UNLOAD Dec 13 03:43:07.820000 audit: BPF prog-id=8 op=UNLOAD Dec 13 03:43:07.821875 systemd[1]: Starting systemd-udevd.service... Dec 13 03:43:07.833943 systemd-udevd[1267]: Using default interface naming scheme 'v252'. Dec 13 03:43:07.850669 systemd[1]: Started systemd-udevd.service. Dec 13 03:43:07.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:07.860709 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Dec 13 03:43:07.859000 audit: BPF prog-id=26 op=LOAD Dec 13 03:43:07.861958 systemd[1]: Starting systemd-networkd.service... 
Dec 13 03:43:07.886000 audit: BPF prog-id=27 op=LOAD Dec 13 03:43:07.905266 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 03:43:07.905328 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 03:43:07.905344 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1336) Dec 13 03:43:07.929732 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 03:43:07.928000 audit: BPF prog-id=28 op=LOAD Dec 13 03:43:07.949000 audit: BPF prog-id=29 op=LOAD Dec 13 03:43:07.951268 systemd[1]: Starting systemd-userdbd.service... Dec 13 03:43:07.951756 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 03:43:07.970650 kernel: ACPI: button: Power Button [PWRF] Dec 13 03:43:08.008104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:43:08.009587 kernel: IPMI message handler: version 39.2 Dec 13 03:43:07.932000 audit[1333]: AVC avc: denied { confidentiality } for pid=1333 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:43:07.932000 audit[1333]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fa7d162a010 a1=4d98c a2=7fa7d32ecbc5 a3=5 items=42 ppid=1267 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:43:07.932000 audit: CWD cwd="/" Dec 13 03:43:07.932000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=1 name=(null) inode=22864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=2 name=(null) inode=22864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=3 name=(null) inode=22865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=4 name=(null) inode=22864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=5 name=(null) inode=22866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=6 name=(null) inode=22864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=7 name=(null) inode=22867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=8 name=(null) inode=22867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=9 name=(null) inode=22868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=10 name=(null) inode=22867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH 
item=11 name=(null) inode=22869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=12 name=(null) inode=22867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=13 name=(null) inode=22870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=14 name=(null) inode=22867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=15 name=(null) inode=22871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=16 name=(null) inode=22867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=17 name=(null) inode=22872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=18 name=(null) inode=22864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=19 name=(null) inode=22873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=20 name=(null) inode=22873 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=21 name=(null) inode=22874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=22 name=(null) inode=22873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=23 name=(null) inode=22875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=24 name=(null) inode=22873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=25 name=(null) inode=22876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=26 name=(null) inode=22873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=27 name=(null) inode=22877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=28 name=(null) inode=22873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=29 name=(null) inode=22878 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=30 name=(null) inode=22864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=31 name=(null) inode=22879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=32 name=(null) inode=22879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=33 name=(null) inode=22880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=34 name=(null) inode=22879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=35 name=(null) inode=22881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=36 name=(null) inode=22879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=37 name=(null) inode=22882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=38 name=(null) inode=22879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=39 name=(null) inode=22883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=40 name=(null) inode=22879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PATH item=41 name=(null) inode=22884 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:43:07.932000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 03:43:08.048436 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 03:43:08.090624 kernel: ipmi device interface Dec 13 03:43:08.090644 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 03:43:08.090737 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Dec 13 03:43:08.096483 systemd[1]: Started systemd-userdbd.service. Dec 13 03:43:08.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:43:08.141585 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 03:43:08.141716 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 03:43:08.161585 kernel: ipmi_si: IPMI System Interface driver Dec 13 03:43:08.199133 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 03:43:08.239209 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 03:43:08.239223 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 03:43:08.239234 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 03:43:08.343111 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 03:43:08.343227 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 03:43:08.343246 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 03:43:08.343338 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 03:43:08.343358 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 03:43:08.340866 systemd-networkd[1310]: bond0: netdev ready Dec 13 03:43:08.343420 systemd-networkd[1310]: lo: Link UP Dec 13 03:43:08.343424 systemd-networkd[1310]: lo: Gained carrier Dec 13 03:43:08.343987 systemd-networkd[1310]: Enumeration completed Dec 13 03:43:08.344055 systemd[1]: Started systemd-networkd.service. Dec 13 03:43:08.344341 systemd-networkd[1310]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 03:43:08.365709 systemd-networkd[1310]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1b.network. Dec 13 03:43:08.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:43:08.391585 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Dec 13 03:43:08.435647 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Dec 13 03:43:08.458586 kernel: intel_rapl_common: Found RAPL domain package Dec 13 03:43:08.458620 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Dec 13 03:43:08.458715 kernel: intel_rapl_common: Found RAPL domain core Dec 13 03:43:08.458730 kernel: intel_rapl_common: Found RAPL domain uncore Dec 13 03:43:08.458742 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 03:43:08.595639 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Dec 13 03:43:08.595772 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 03:43:08.613630 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Dec 13 03:43:08.632982 systemd-networkd[1310]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. 
Dec 13 03:43:08.651585 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 03:43:08.677633 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:43:08.814641 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:43:08.841620 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Dec 13 03:43:08.862637 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Dec 13 03:43:08.882588 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 03:43:08.892118 systemd-networkd[1310]: bond0: Link UP Dec 13 03:43:08.892332 systemd-networkd[1310]: enp2s0f1np1: Link UP Dec 13 03:43:08.892460 systemd-networkd[1310]: enp2s0f1np1: Gained carrier Dec 13 03:43:08.893541 systemd-networkd[1310]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Dec 13 03:43:08.938628 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:08.940844 systemd[1]: Finished systemd-udev-settle.service. Dec 13 03:43:08.958647 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:08.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:08.974352 systemd[1]: Starting lvm2-activation-early.service... Dec 13 03:43:08.978606 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:08.991159 lvm[1373]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 03:43:08.999584 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.019619 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.039584 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.059623 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.064043 systemd[1]: Finished lvm2-activation-early.service. Dec 13 03:43:09.078646 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.093719 systemd[1]: Reached target cryptsetup.target. Dec 13 03:43:09.098634 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.114294 systemd[1]: Starting lvm2-activation.service... Dec 13 03:43:09.116732 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:43:09.117627 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.136633 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.154609 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.172584 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.178140 systemd[1]: Finished lvm2-activation.service. Dec 13 03:43:09.190603 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.205729 systemd[1]: Reached target local-fs-pre.target. 
Dec 13 03:43:09.208584 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.223696 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 03:43:09.223710 systemd[1]: Reached target local-fs.target. Dec 13 03:43:09.226583 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.241686 systemd[1]: Reached target machines.target. Dec 13 03:43:09.244603 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.259277 systemd[1]: Starting ldconfig.service... Dec 13 03:43:09.262626 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.279921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:43:09.279943 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:43:09.280461 systemd[1]: Starting systemd-boot-update.service... Dec 13 03:43:09.280607 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.298642 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.307822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 03:43:09.314605 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.315012 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 03:43:09.315558 systemd[1]: Starting systemd-sysext.service... Dec 13 03:43:09.315769 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1376 (bootctl) Dec 13 03:43:09.316305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Dec 13 03:43:09.330585 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.330947 systemd-networkd[1310]: enp2s0f0np0: Link UP Dec 13 03:43:09.331218 systemd-networkd[1310]: bond0: Gained carrier Dec 13 03:43:09.331308 systemd-networkd[1310]: enp2s0f0np0: Gained carrier Dec 13 03:43:09.362022 kernel: bond0: (slave enp2s0f1np1): link status down again after 200 ms Dec 13 03:43:09.362054 kernel: bond0: (slave enp2s0f1np1): link status definitely down, disabling slave Dec 13 03:43:09.362070 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:43:09.392987 systemd-networkd[1310]: enp2s0f1np1: Link DOWN Dec 13 03:43:09.392991 systemd-networkd[1310]: enp2s0f1np1: Lost carrier Dec 13 03:43:09.393584 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Dec 13 03:43:09.393606 kernel: bond0: active interface up! Dec 13 03:43:09.403942 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 03:43:09.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.405984 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 03:43:09.421808 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 03:43:09.421895 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 03:43:09.474639 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 03:43:09.476631 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 03:43:09.476962 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 03:43:09.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:43:09.508585 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 03:43:09.523465 systemd-fsck[1385]: fsck.fat 4.2 (2021-01-31) Dec 13 03:43:09.523465 systemd-fsck[1385]: /dev/sdb1: 789 files, 119291/258078 clusters Dec 13 03:43:09.524233 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 03:43:09.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.535614 systemd[1]: Mounting boot.mount... Dec 13 03:43:09.554583 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 03:43:09.554614 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Dec 13 03:43:09.555079 systemd[1]: Mounted boot.mount. Dec 13 03:43:09.566388 (sd-sysext)[1389]: Using extensions 'kubernetes'. Dec 13 03:43:09.566563 (sd-sysext)[1389]: Merged extensions into '/usr'. Dec 13 03:43:09.568757 systemd-networkd[1310]: enp2s0f1np1: Link UP Dec 13 03:43:09.568921 systemd-networkd[1310]: enp2s0f1np1: Gained carrier Dec 13 03:43:09.580947 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:43:09.581651 systemd[1]: Mounting usr-share-oem.mount... Dec 13 03:43:09.588722 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:43:09.589362 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:43:09.596137 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:43:09.611131 systemd[1]: Starting modprobe@loop.service... 
Dec 13 03:43:09.614638 kernel: bond0: (slave enp2s0f1np1): link status up, enabling it in 200 ms Dec 13 03:43:09.614663 kernel: bond0: (slave enp2s0f1np1): invalid new link 3 on slave Dec 13 03:43:09.632642 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:43:09.632712 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:43:09.632776 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:43:09.634451 systemd[1]: Finished systemd-boot-update.service. Dec 13 03:43:09.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.642777 systemd[1]: Mounted usr-share-oem.mount. Dec 13 03:43:09.649830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:43:09.649906 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:43:09.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.657799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:43:09.657861 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 03:43:09.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.665796 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:43:09.665854 systemd[1]: Finished modprobe@loop.service. Dec 13 03:43:09.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.673852 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:43:09.673910 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:43:09.674407 systemd[1]: Finished systemd-sysext.service. Dec 13 03:43:09.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.683177 systemd[1]: Starting ensure-sysext.service... Dec 13 03:43:09.690089 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 03:43:09.698741 systemd[1]: Reloading. 
Dec 13 03:43:09.701476 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 03:43:09.705012 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 03:43:09.710295 ldconfig[1375]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 03:43:09.710293 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 03:43:09.723068 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-12-13T03:43:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:43:09.723092 /usr/lib/systemd/system-generators/torcx-generator[1415]: time="2024-12-13T03:43:09Z" level=info msg="torcx already run" Dec 13 03:43:09.781406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:43:09.781414 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:43:09.792398 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 03:43:09.837000 audit: BPF prog-id=30 op=LOAD Dec 13 03:43:09.837000 audit: BPF prog-id=26 op=UNLOAD Dec 13 03:43:09.839584 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 03:43:09.838000 audit: BPF prog-id=31 op=LOAD Dec 13 03:43:09.838000 audit: BPF prog-id=27 op=UNLOAD Dec 13 03:43:09.838000 audit: BPF prog-id=32 op=LOAD Dec 13 03:43:09.838000 audit: BPF prog-id=33 op=LOAD Dec 13 03:43:09.838000 audit: BPF prog-id=28 op=UNLOAD Dec 13 03:43:09.838000 audit: BPF prog-id=29 op=UNLOAD Dec 13 03:43:09.838000 audit: BPF prog-id=34 op=LOAD Dec 13 03:43:09.839000 audit: BPF prog-id=21 op=UNLOAD Dec 13 03:43:09.839000 audit: BPF prog-id=35 op=LOAD Dec 13 03:43:09.839000 audit: BPF prog-id=36 op=LOAD Dec 13 03:43:09.839000 audit: BPF prog-id=22 op=UNLOAD Dec 13 03:43:09.839000 audit: BPF prog-id=23 op=UNLOAD Dec 13 03:43:09.840000 audit: BPF prog-id=37 op=LOAD Dec 13 03:43:09.840000 audit: BPF prog-id=38 op=LOAD Dec 13 03:43:09.840000 audit: BPF prog-id=24 op=UNLOAD Dec 13 03:43:09.840000 audit: BPF prog-id=25 op=UNLOAD Dec 13 03:43:09.843032 systemd[1]: Finished ldconfig.service. Dec 13 03:43:09.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.850199 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 03:43:09.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:43:09.861056 systemd[1]: Starting audit-rules.service... Dec 13 03:43:09.868239 systemd[1]: Starting clean-ca-certificates.service... Dec 13 03:43:09.877301 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 03:43:09.877000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 03:43:09.877000 audit[1494]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffddae82d90 a2=420 a3=0 items=0 ppid=1477 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 03:43:09.877000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 03:43:09.879461 augenrules[1494]: No rules
Dec 13 03:43:09.886670 systemd[1]: Starting systemd-resolved.service...
Dec 13 03:43:09.894684 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 03:43:09.902209 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 03:43:09.909126 systemd[1]: Finished audit-rules.service.
Dec 13 03:43:09.915812 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 03:43:09.923805 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 03:43:09.936306 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 03:43:09.945152 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 03:43:09.945772 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 03:43:09.953197 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 03:43:09.960156 systemd[1]: Starting modprobe@loop.service...
Dec 13 03:43:09.964584 systemd-resolved[1499]: Positive Trust Anchors:
Dec 13 03:43:09.964592 systemd-resolved[1499]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 03:43:09.964611 systemd-resolved[1499]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 03:43:09.966638 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 03:43:09.966706 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:09.967411 systemd[1]: Starting systemd-update-done.service...
Dec 13 03:43:09.968388 systemd-resolved[1499]: Using system hostname 'ci-3510.3.6-a-4c4d6acc59'.
Dec 13 03:43:09.973616 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 03:43:09.974118 systemd[1]: Started systemd-timesyncd.service.
Dec 13 03:43:09.982986 systemd[1]: Started systemd-resolved.service.
Dec 13 03:43:09.990799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 03:43:09.990866 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 03:43:09.998803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:43:09.998864 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:43:10.006801 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:43:10.006860 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:43:10.014799 systemd[1]: Finished systemd-update-done.service.
Dec 13 03:43:10.022835 systemd[1]: Reached target network.target.
Dec 13 03:43:10.030659 systemd[1]: Reached target nss-lookup.target.
Dec 13 03:43:10.038658 systemd[1]: Reached target time-set.target.
Dec 13 03:43:10.046646 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:43:10.046720 systemd[1]: Reached target sysinit.target.
Dec 13 03:43:10.054697 systemd[1]: Started motdgen.path.
Dec 13 03:43:10.061675 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 03:43:10.071797 systemd[1]: Started logrotate.timer.
Dec 13 03:43:10.078759 systemd[1]: Started mdadm.timer.
Dec 13 03:43:10.085710 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 03:43:10.093685 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 03:43:10.093748 systemd[1]: Reached target paths.target.
Dec 13 03:43:10.100723 systemd[1]: Reached target timers.target.
Dec 13 03:43:10.107862 systemd[1]: Listening on dbus.socket.
Dec 13 03:43:10.115337 systemd[1]: Starting docker.socket...
Dec 13 03:43:10.123127 systemd[1]: Listening on sshd.socket.
Dec 13 03:43:10.129752 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:10.129821 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:43:10.130632 systemd[1]: Listening on docker.socket.
Dec 13 03:43:10.138522 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 03:43:10.138587 systemd[1]: Reached target sockets.target.
Dec 13 03:43:10.146725 systemd[1]: Reached target basic.target.
Dec 13 03:43:10.153724 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 03:43:10.153840 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 03:43:10.154407 systemd[1]: Starting containerd.service...
Dec 13 03:43:10.162184 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 03:43:10.171228 systemd[1]: Starting coreos-metadata.service...
Dec 13 03:43:10.178232 systemd[1]: Starting dbus.service...
Dec 13 03:43:10.184244 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 03:43:10.188634 jq[1517]: false
Dec 13 03:43:10.191174 systemd[1]: Starting extend-filesystems.service...
Dec 13 03:43:10.191776 coreos-metadata[1510]: Dec 13 03:43:10.191 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 03:43:10.197148 dbus-daemon[1516]: [system] SELinux support is enabled
Dec 13 03:43:10.197650 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 03:43:10.198642 systemd[1]: Starting modprobe@drm.service...
Dec 13 03:43:10.198911 extend-filesystems[1518]: Found loop1
Dec 13 03:43:10.198911 extend-filesystems[1518]: Found sda
Dec 13 03:43:10.235693 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Dec 13 03:43:10.235715 coreos-metadata[1513]: Dec 13 03:43:10.200 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 03:43:10.206303 systemd[1]: Starting motdgen.service...
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb1
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb2
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb3
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found usr
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb4
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb6
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb7
Dec 13 03:43:10.235886 extend-filesystems[1518]: Found sdb9
Dec 13 03:43:10.235886 extend-filesystems[1518]: Checking size of /dev/sdb9
Dec 13 03:43:10.235886 extend-filesystems[1518]: Resized partition /dev/sdb9
Dec 13 03:43:10.229440 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 03:43:10.355784 extend-filesystems[1528]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 03:43:10.243250 systemd[1]: Starting sshd-keygen.service...
Dec 13 03:43:10.257236 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 03:43:10.276666 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:43:10.371903 update_engine[1547]: I1213 03:43:10.345273  1547 main.cc:92] Flatcar Update Engine starting
Dec 13 03:43:10.371903 update_engine[1547]: I1213 03:43:10.348501  1547 update_check_scheduler.cc:74] Next update check in 9m0s
Dec 13 03:43:10.277324 systemd[1]: Starting tcsd.service...
Dec 13 03:43:10.372070 jq[1548]: true
Dec 13 03:43:10.294989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 03:43:10.295379 systemd[1]: Starting update-engine.service...
Dec 13 03:43:10.314314 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 03:43:10.329651 systemd[1]: Started dbus.service.
Dec 13 03:43:10.349416 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 03:43:10.349506 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 03:43:10.349726 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 03:43:10.349791 systemd[1]: Finished modprobe@drm.service.
Dec 13 03:43:10.363904 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 03:43:10.363984 systemd[1]: Finished motdgen.service.
Dec 13 03:43:10.378959 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 03:43:10.379041 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 03:43:10.389407 jq[1550]: true
Dec 13 03:43:10.390054 systemd[1]: Finished ensure-sysext.service.
Dec 13 03:43:10.398689 env[1551]: time="2024-12-13T03:43:10.398572450Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 03:43:10.398841 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Dec 13 03:43:10.398928 systemd[1]: Condition check resulted in tcsd.service being skipped.
Dec 13 03:43:10.406987 env[1551]: time="2024-12-13T03:43:10.406968169Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 03:43:10.407062 env[1551]: time="2024-12-13T03:43:10.407051400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407667 env[1551]: time="2024-12-13T03:43:10.407649922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407702 env[1551]: time="2024-12-13T03:43:10.407666952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407792 env[1551]: time="2024-12-13T03:43:10.407780996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407816 env[1551]: time="2024-12-13T03:43:10.407791287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407816 env[1551]: time="2024-12-13T03:43:10.407798365Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 03:43:10.407816 env[1551]: time="2024-12-13T03:43:10.407803678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407860 env[1551]: time="2024-12-13T03:43:10.407841534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:10.407988 env[1551]: time="2024-12-13T03:43:10.407955915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:43:10.408053 env[1551]: time="2024-12-13T03:43:10.408019194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:43:10.408053 env[1551]: time="2024-12-13T03:43:10.408028585Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 03:43:10.408090 env[1551]: time="2024-12-13T03:43:10.408053764Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 03:43:10.408090 env[1551]: time="2024-12-13T03:43:10.408060821Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 03:43:10.410079 systemd[1]: Started update-engine.service.
Dec 13 03:43:10.418810 systemd[1]: Started locksmithd.service.
Dec 13 03:43:10.423416 env[1551]: time="2024-12-13T03:43:10.423403407Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423420324Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423427895Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423446374Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423454239Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423462150Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423469253Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423476560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423483281Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423491213Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423498472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.425511 env[1551]: time="2024-12-13T03:43:10.423504906Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 03:43:10.425648 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 03:43:10.425667 systemd[1]: Reached target system-config.target.
Dec 13 03:43:10.425943 env[1551]: time="2024-12-13T03:43:10.425929819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 03:43:10.425992 env[1551]: time="2024-12-13T03:43:10.425980945Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 03:43:10.426150 env[1551]: time="2024-12-13T03:43:10.426111984Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 03:43:10.426150 env[1551]: time="2024-12-13T03:43:10.426127733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426150 env[1551]: time="2024-12-13T03:43:10.426135462Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426164305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426176452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426183748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426189775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426196142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426202885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426211 env[1551]: time="2024-12-13T03:43:10.426209059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426313 env[1551]: time="2024-12-13T03:43:10.426215034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426313 env[1551]: time="2024-12-13T03:43:10.426222766Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 03:43:10.426313 env[1551]: time="2024-12-13T03:43:10.426283519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426313 env[1551]: time="2024-12-13T03:43:10.426292503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426313 env[1551]: time="2024-12-13T03:43:10.426299029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426313 env[1551]: time="2024-12-13T03:43:10.426305528Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 03:43:10.426427 env[1551]: time="2024-12-13T03:43:10.426315586Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 03:43:10.426427 env[1551]: time="2024-12-13T03:43:10.426322822Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 03:43:10.426427 env[1551]: time="2024-12-13T03:43:10.426333095Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 03:43:10.426427 env[1551]: time="2024-12-13T03:43:10.426354310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 03:43:10.426492 env[1551]: time="2024-12-13T03:43:10.426468144Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426499202Z" level=info msg="Connect containerd service"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426517521Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426779176Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426881236Z" level=info msg="Start subscribing containerd event"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426898771Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426914107Z" level=info msg="Start recovering state"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426926552Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426955609Z" level=info msg="containerd successfully booted in 0.028762s"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426961970Z" level=info msg="Start event monitor"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426971924Z" level=info msg="Start snapshots syncer"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426991102Z" level=info msg="Start cni network conf syncer for default"
Dec 13 03:43:10.429092 env[1551]: time="2024-12-13T03:43:10.426996576Z" level=info msg="Start streaming server"
Dec 13 03:43:10.431657 bash[1583]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:43:10.432628 systemd-networkd[1310]: bond0: Gained IPv6LL
Dec 13 03:43:10.432807 systemd-timesyncd[1500]: Network configuration changed, trying to establish connection.
Dec 13 03:43:10.434301 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 03:43:10.434975 systemd[1]: Starting systemd-logind.service...
Dec 13 03:43:10.442698 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 03:43:10.442721 systemd[1]: Reached target user-config.target.
Dec 13 03:43:10.451694 systemd[1]: Started containerd.service.
Dec 13 03:43:10.458372 systemd-logind[1589]: Watching system buttons on /dev/input/event3 (Power Button)
Dec 13 03:43:10.458383 systemd-logind[1589]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 03:43:10.458392 systemd-logind[1589]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Dec 13 03:43:10.458491 systemd-logind[1589]: New seat seat0.
Dec 13 03:43:10.458968 systemd[1]: Finished sshd-keygen.service.
Dec 13 03:43:10.466766 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 03:43:10.476814 systemd[1]: Started systemd-logind.service.
Dec 13 03:43:10.478971 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 03:43:10.484745 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:10.485571 systemd[1]: Starting issuegen.service...
Dec 13 03:43:10.492614 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 03:43:10.492966 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 03:43:10.493047 systemd[1]: Finished issuegen.service.
Dec 13 03:43:10.500641 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 03:43:10.508890 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 03:43:10.517540 systemd[1]: Started getty@tty1.service.
Dec 13 03:43:10.525285 systemd[1]: Started serial-getty@ttyS1.service.
Dec 13 03:43:10.533782 systemd[1]: Reached target getty.target.
Dec 13 03:43:10.687925 systemd-timesyncd[1500]: Network configuration changed, trying to establish connection.
Dec 13 03:43:10.688040 systemd-timesyncd[1500]: Network configuration changed, trying to establish connection.
Dec 13 03:43:10.688876 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 03:43:10.704632 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Dec 13 03:43:10.704870 systemd[1]: Reached target network-online.target.
Dec 13 03:43:10.717020 systemd[1]: Starting kubelet.service...
Dec 13 03:43:10.739335 extend-filesystems[1528]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Dec 13 03:43:10.739335 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 56
Dec 13 03:43:10.739335 extend-filesystems[1528]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Dec 13 03:43:10.790724 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Dec 13 03:43:10.791039 extend-filesystems[1518]: Resized filesystem in /dev/sdb9
Dec 13 03:43:10.742566 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 03:43:10.743061 systemd[1]: Finished extend-filesystems.service.
Dec 13 03:43:11.523953 systemd[1]: Started kubelet.service.
Dec 13 03:43:12.244349 kubelet[1619]: E1213 03:43:12.244287    1619 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 03:43:12.245788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:43:12.245886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 03:43:15.546958 login[1608]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 03:43:15.553690 login[1607]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 03:43:15.555436 systemd-logind[1589]: New session 1 of user core.
Dec 13 03:43:15.555994 systemd[1]: Created slice user-500.slice.
Dec 13 03:43:15.556617 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 03:43:15.557999 systemd-logind[1589]: New session 2 of user core.
Dec 13 03:43:15.561789 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 03:43:15.562505 systemd[1]: Starting user@500.service...
Dec 13 03:43:15.577787 (systemd)[1639]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:15.658671 systemd[1639]: Queued start job for default target default.target.
Dec 13 03:43:15.658903 systemd[1639]: Reached target paths.target.
Dec 13 03:43:15.658915 systemd[1639]: Reached target sockets.target.
Dec 13 03:43:15.658923 systemd[1639]: Reached target timers.target.
Dec 13 03:43:15.658930 systemd[1639]: Reached target basic.target.
Dec 13 03:43:15.658948 systemd[1639]: Reached target default.target.
Dec 13 03:43:15.658963 systemd[1639]: Startup finished in 70ms.
Dec 13 03:43:15.659010 systemd[1]: Started user@500.service.
Dec 13 03:43:15.659572 systemd[1]: Started session-1.scope.
Dec 13 03:43:15.659972 systemd[1]: Started session-2.scope.
Dec 13 03:43:16.320899 coreos-metadata[1513]: Dec 13 03:43:16.320 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Dec 13 03:43:16.321687 coreos-metadata[1510]: Dec 13 03:43:16.320 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Dec 13 03:43:17.022636 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2
Dec 13 03:43:17.029621 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Dec 13 03:43:17.321265 coreos-metadata[1513]: Dec 13 03:43:17.321 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Dec 13 03:43:17.322039 coreos-metadata[1510]: Dec 13 03:43:17.321 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Dec 13 03:43:17.630929 systemd[1]: Created slice system-sshd.slice.
Dec 13 03:43:17.631512 systemd[1]: Started sshd@0-145.40.90.151:22-139.178.68.195:39214.service.
Dec 13 03:43:17.681051 sshd[1660]: Accepted publickey for core from 139.178.68.195 port 39214 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 03:43:17.682541 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:17.687660 systemd-logind[1589]: New session 3 of user core.
Dec 13 03:43:17.688875 systemd[1]: Started session-3.scope.
Dec 13 03:43:17.736775 coreos-metadata[1510]: Dec 13 03:43:17.736 INFO Fetch successful
Dec 13 03:43:17.746004 systemd[1]: Started sshd@1-145.40.90.151:22-139.178.68.195:55482.service.
Dec 13 03:43:17.774979 unknown[1510]: wrote ssh authorized keys file for user: core
Dec 13 03:43:17.782045 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 55482 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 03:43:17.782769 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:17.785077 systemd-logind[1589]: New session 4 of user core.
Dec 13 03:43:17.785526 systemd[1]: Started session-4.scope.
Dec 13 03:43:17.787094 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:43:17.787391 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 03:43:17.836316 sshd[1665]: pam_unix(sshd:session): session closed for user core
Dec 13 03:43:17.838796 systemd[1]: sshd@1-145.40.90.151:22-139.178.68.195:55482.service: Deactivated successfully.
Dec 13 03:43:17.839317 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 03:43:17.839987 systemd-logind[1589]: Session 4 logged out. Waiting for processes to exit.
Dec 13 03:43:17.840971 systemd[1]: Started sshd@2-145.40.90.151:22-139.178.68.195:55498.service.
Dec 13 03:43:17.841741 systemd-logind[1589]: Removed session 4.
Dec 13 03:43:17.880668 sshd[1672]: Accepted publickey for core from 139.178.68.195 port 55498 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 03:43:17.881451 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 03:43:17.884126 systemd-logind[1589]: New session 5 of user core.
Dec 13 03:43:17.884620 systemd[1]: Started session-5.scope.
Dec 13 03:43:17.938731 sshd[1672]: pam_unix(sshd:session): session closed for user core
Dec 13 03:43:17.939971 systemd[1]: sshd@2-145.40.90.151:22-139.178.68.195:55498.service: Deactivated successfully.
Dec 13 03:43:17.940338 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 03:43:17.940740 systemd-logind[1589]: Session 5 logged out. Waiting for processes to exit. Dec 13 03:43:17.941323 systemd-logind[1589]: Removed session 5. Dec 13 03:43:18.620101 coreos-metadata[1513]: Dec 13 03:43:18.619 INFO Fetch successful Dec 13 03:43:18.698693 systemd[1]: Finished coreos-metadata.service. Dec 13 03:43:18.699590 systemd[1]: Started packet-phone-home.service. Dec 13 03:43:18.699726 systemd[1]: Reached target multi-user.target. Dec 13 03:43:18.700374 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 03:43:18.704709 curl[1679]: % Total % Received % Xferd Average Speed Time Time Time Current Dec 13 03:43:18.704678 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 03:43:18.704892 curl[1679]: Dload Upload Total Spent Left Speed Dec 13 03:43:18.704755 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 03:43:18.704885 systemd[1]: Startup finished in 2.031s (kernel) + 21.309s (initrd) + 15.552s (userspace) = 38.894s. Dec 13 03:43:19.024538 curl[1679]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Dec 13 03:43:19.027005 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 03:43:22.497846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 03:43:22.498398 systemd[1]: Stopped kubelet.service. Dec 13 03:43:22.500727 systemd[1]: Starting kubelet.service... Dec 13 03:43:22.705774 systemd[1]: Started kubelet.service. 
Dec 13 03:43:22.759589 kubelet[1686]: E1213 03:43:22.759478 1686 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:43:22.762203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:43:22.762287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 03:43:25.735839 systemd[1]: Started sshd@3-145.40.90.151:22-157.230.83.38:51668.service. Dec 13 03:43:25.832403 sshd[1705]: kex_exchange_identification: Connection closed by remote host Dec 13 03:43:25.832403 sshd[1705]: Connection closed by 157.230.83.38 port 51668 Dec 13 03:43:25.833982 systemd[1]: sshd@3-145.40.90.151:22-157.230.83.38:51668.service: Deactivated successfully. Dec 13 03:43:27.948714 systemd[1]: Started sshd@4-145.40.90.151:22-139.178.68.195:47766.service. Dec 13 03:43:27.985455 sshd[1708]: Accepted publickey for core from 139.178.68.195 port 47766 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:27.986168 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:27.988523 systemd-logind[1589]: New session 6 of user core. Dec 13 03:43:27.989020 systemd[1]: Started session-6.scope. Dec 13 03:43:28.041360 sshd[1708]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:28.042984 systemd[1]: sshd@4-145.40.90.151:22-139.178.68.195:47766.service: Deactivated successfully. Dec 13 03:43:28.043281 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 03:43:28.043545 systemd-logind[1589]: Session 6 logged out. Waiting for processes to exit. Dec 13 03:43:28.044092 systemd[1]: Started sshd@5-145.40.90.151:22-139.178.68.195:47774.service. Dec 13 03:43:28.044522 systemd-logind[1589]: Removed session 6. 
Dec 13 03:43:28.081055 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 47774 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:28.081919 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:28.084928 systemd-logind[1589]: New session 7 of user core. Dec 13 03:43:28.085554 systemd[1]: Started session-7.scope. Dec 13 03:43:28.137029 sshd[1714]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:28.138634 systemd[1]: sshd@5-145.40.90.151:22-139.178.68.195:47774.service: Deactivated successfully. Dec 13 03:43:28.138943 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 03:43:28.139329 systemd-logind[1589]: Session 7 logged out. Waiting for processes to exit. Dec 13 03:43:28.139846 systemd[1]: Started sshd@6-145.40.90.151:22-139.178.68.195:47788.service. Dec 13 03:43:28.140287 systemd-logind[1589]: Removed session 7. Dec 13 03:43:28.176541 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 47788 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:28.177412 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:28.180344 systemd-logind[1589]: New session 8 of user core. Dec 13 03:43:28.180917 systemd[1]: Started session-8.scope. Dec 13 03:43:28.244411 sshd[1720]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:28.251557 systemd[1]: sshd@6-145.40.90.151:22-139.178.68.195:47788.service: Deactivated successfully. Dec 13 03:43:28.253249 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 03:43:28.254862 systemd-logind[1589]: Session 8 logged out. Waiting for processes to exit. Dec 13 03:43:28.257664 systemd[1]: Started sshd@7-145.40.90.151:22-139.178.68.195:47800.service. Dec 13 03:43:28.260180 systemd-logind[1589]: Removed session 8. 
Dec 13 03:43:28.366444 sshd[1726]: Accepted publickey for core from 139.178.68.195 port 47800 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:43:28.368059 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:43:28.372767 systemd-logind[1589]: New session 9 of user core. Dec 13 03:43:28.373813 systemd[1]: Started session-9.scope. Dec 13 03:43:28.463062 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 03:43:28.463812 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 03:43:28.692612 systemd[1]: Started sshd@8-145.40.90.151:22-51.89.216.178:54880.service. Dec 13 03:43:29.365189 systemd[1]: Stopped kubelet.service. Dec 13 03:43:29.366645 systemd[1]: Starting kubelet.service... Dec 13 03:43:29.377148 systemd[1]: Reloading. Dec 13 03:43:29.403289 /usr/lib/systemd/system-generators/torcx-generator[1815]: time="2024-12-13T03:43:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:43:29.403311 /usr/lib/systemd/system-generators/torcx-generator[1815]: time="2024-12-13T03:43:29Z" level=info msg="torcx already run" Dec 13 03:43:29.456980 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:43:29.456989 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:43:29.469159 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 03:43:29.527812 sshd[1745]: Invalid user bvn from 51.89.216.178 port 54880 Dec 13 03:43:29.528297 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 03:43:29.528335 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 03:43:29.528439 systemd[1]: Stopped kubelet.service. Dec 13 03:43:29.529399 systemd[1]: Starting kubelet.service... Dec 13 03:43:29.530570 sshd[1745]: pam_faillock(sshd:auth): User unknown Dec 13 03:43:29.530761 sshd[1745]: pam_unix(sshd:auth): check pass; user unknown Dec 13 03:43:29.530778 sshd[1745]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.89.216.178 Dec 13 03:43:29.530934 sshd[1745]: pam_faillock(sshd:auth): User unknown Dec 13 03:43:29.758483 systemd[1]: Started kubelet.service. Dec 13 03:43:29.797486 kubelet[1880]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:43:29.797486 kubelet[1880]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 03:43:29.797486 kubelet[1880]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 03:43:29.800083 kubelet[1880]: I1213 03:43:29.800028 1880 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 03:43:30.128397 kubelet[1880]: I1213 03:43:30.128356 1880 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 03:43:30.128397 kubelet[1880]: I1213 03:43:30.128369 1880 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 03:43:30.128477 kubelet[1880]: I1213 03:43:30.128471 1880 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 03:43:30.137496 kubelet[1880]: I1213 03:43:30.137464 1880 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 03:43:30.220489 kubelet[1880]: I1213 03:43:30.220385 1880 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 03:43:30.224256 kubelet[1880]: I1213 03:43:30.224138 1880 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 03:43:30.224764 kubelet[1880]: I1213 03:43:30.224221 1880 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.67.80.25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 03:43:30.226247 kubelet[1880]: I1213 03:43:30.226165 1880 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 03:43:30.226247 kubelet[1880]: I1213 03:43:30.226211 1880 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 03:43:30.226548 kubelet[1880]: I1213 03:43:30.226465 1880 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:43:30.228942 kubelet[1880]: I1213 03:43:30.228871 1880 kubelet.go:400] "Attempting to sync node 
with API server" Dec 13 03:43:30.228942 kubelet[1880]: I1213 03:43:30.228914 1880 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 03:43:30.229269 kubelet[1880]: I1213 03:43:30.228973 1880 kubelet.go:312] "Adding apiserver pod source" Dec 13 03:43:30.229269 kubelet[1880]: I1213 03:43:30.229033 1880 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 03:43:30.229269 kubelet[1880]: E1213 03:43:30.229198 1880 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:30.229700 kubelet[1880]: E1213 03:43:30.229389 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:30.245863 kubelet[1880]: I1213 03:43:30.245773 1880 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 03:43:30.261298 kubelet[1880]: I1213 03:43:30.261218 1880 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 03:43:30.261516 kubelet[1880]: W1213 03:43:30.261334 1880 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 03:43:30.262772 kubelet[1880]: I1213 03:43:30.262727 1880 server.go:1264] "Started kubelet" Dec 13 03:43:30.263015 kubelet[1880]: I1213 03:43:30.262905 1880 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 03:43:30.263206 kubelet[1880]: I1213 03:43:30.262995 1880 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 03:43:30.263686 kubelet[1880]: I1213 03:43:30.263634 1880 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 03:43:30.298138 kubelet[1880]: I1213 03:43:30.298065 1880 server.go:455] "Adding debug handlers to kubelet server" Dec 13 03:43:30.324209 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 03:43:30.324451 kubelet[1880]: I1213 03:43:30.324409 1880 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 03:43:30.324634 kubelet[1880]: I1213 03:43:30.324598 1880 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 03:43:30.324806 kubelet[1880]: I1213 03:43:30.324717 1880 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 03:43:30.325069 kubelet[1880]: I1213 03:43:30.325001 1880 reconciler.go:26] "Reconciler: start to sync state" Dec 13 03:43:30.326154 kubelet[1880]: I1213 03:43:30.326099 1880 factory.go:221] Registration of the systemd container factory successfully Dec 13 03:43:30.326420 kubelet[1880]: I1213 03:43:30.326368 1880 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 03:43:30.329202 kubelet[1880]: I1213 03:43:30.329155 1880 factory.go:221] Registration of the containerd container factory successfully Dec 13 03:43:30.332094 kubelet[1880]: E1213 03:43:30.332037 1880 nodelease.go:49] "Failed to get node when trying to set owner ref to the 
node lease" err="nodes \"10.67.80.25\" not found" node="10.67.80.25" Dec 13 03:43:30.335475 kubelet[1880]: E1213 03:43:30.335406 1880 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.80.25\" not found" Dec 13 03:43:30.343043 kubelet[1880]: E1213 03:43:30.342962 1880 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 03:43:30.362903 kubelet[1880]: I1213 03:43:30.362835 1880 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 03:43:30.362903 kubelet[1880]: I1213 03:43:30.362864 1880 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 03:43:30.362903 kubelet[1880]: I1213 03:43:30.362904 1880 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:43:30.367178 kubelet[1880]: I1213 03:43:30.367122 1880 policy_none.go:49] "None policy: Start" Dec 13 03:43:30.368526 kubelet[1880]: I1213 03:43:30.368461 1880 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:43:30.368526 kubelet[1880]: I1213 03:43:30.368505 1880 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:43:30.376233 systemd[1]: Created slice kubepods.slice. Dec 13 03:43:30.381978 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 03:43:30.385247 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 03:43:30.396169 kubelet[1880]: I1213 03:43:30.396112 1880 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:43:30.396271 kubelet[1880]: I1213 03:43:30.396242 1880 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:43:30.396352 kubelet[1880]: I1213 03:43:30.396344 1880 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:43:30.396967 kubelet[1880]: E1213 03:43:30.396956 1880 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.25\" not found" Dec 13 03:43:30.436426 kubelet[1880]: I1213 03:43:30.436382 1880 kubelet_node_status.go:73] "Attempting to register node" node="10.67.80.25" Dec 13 03:43:30.443583 kubelet[1880]: I1213 03:43:30.443542 1880 kubelet_node_status.go:76] "Successfully registered node" node="10.67.80.25" Dec 13 03:43:30.453199 kubelet[1880]: I1213 03:43:30.453154 1880 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 03:43:30.453350 env[1551]: time="2024-12-13T03:43:30.453327852Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 03:43:30.453520 kubelet[1880]: I1213 03:43:30.453426 1880 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 03:43:30.455065 kubelet[1880]: I1213 03:43:30.455049 1880 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 03:43:30.455616 kubelet[1880]: I1213 03:43:30.455607 1880 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 03:43:30.455616 kubelet[1880]: I1213 03:43:30.455617 1880 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:43:30.455670 kubelet[1880]: I1213 03:43:30.455626 1880 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 03:43:30.455670 kubelet[1880]: E1213 03:43:30.455650 1880 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 03:43:30.796904 sudo[1729]: pam_unix(sudo:session): session closed for user root Dec 13 03:43:30.801816 sshd[1726]: pam_unix(sshd:session): session closed for user core Dec 13 03:43:30.807745 systemd[1]: sshd@7-145.40.90.151:22-139.178.68.195:47800.service: Deactivated successfully. Dec 13 03:43:30.809512 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 03:43:30.811167 systemd-logind[1589]: Session 9 logged out. Waiting for processes to exit. Dec 13 03:43:30.813319 systemd-logind[1589]: Removed session 9. Dec 13 03:43:31.129848 kubelet[1880]: I1213 03:43:31.129542 1880 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 03:43:31.130755 kubelet[1880]: W1213 03:43:31.129949 1880 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 03:43:31.130755 kubelet[1880]: W1213 03:43:31.130012 1880 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 03:43:31.130755 kubelet[1880]: W1213 03:43:31.130085 1880 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - 
watch lasted less than a second and no items received Dec 13 03:43:31.230445 kubelet[1880]: E1213 03:43:31.230309 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:31.230445 kubelet[1880]: I1213 03:43:31.230346 1880 apiserver.go:52] "Watching apiserver" Dec 13 03:43:31.238843 kubelet[1880]: I1213 03:43:31.238724 1880 topology_manager.go:215] "Topology Admit Handler" podUID="68d2b6f8-edb0-402d-82c6-37631e25292f" podNamespace="kube-system" podName="kube-proxy-5hvl4" Dec 13 03:43:31.239029 kubelet[1880]: I1213 03:43:31.238928 1880 topology_manager.go:215] "Topology Admit Handler" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" podNamespace="kube-system" podName="cilium-dcl2v" Dec 13 03:43:31.243255 systemd[1]: Created slice kubepods-besteffort-pod68d2b6f8_edb0_402d_82c6_37631e25292f.slice. Dec 13 03:43:31.266485 systemd[1]: Created slice kubepods-burstable-pod89fb18dd_ec05_4608_98d9_9a6b038c1982.slice. Dec 13 03:43:31.326160 kubelet[1880]: I1213 03:43:31.326098 1880 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 03:43:31.330976 kubelet[1880]: I1213 03:43:31.330861 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-config-path\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.330976 kubelet[1880]: I1213 03:43:31.330957 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-net\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331308 kubelet[1880]: I1213 03:43:31.331074 1880 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-lib-modules\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331308 kubelet[1880]: I1213 03:43:31.331155 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-kernel\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331308 kubelet[1880]: I1213 03:43:31.331226 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-hubble-tls\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331308 kubelet[1880]: I1213 03:43:31.331286 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv2wq\" (UniqueName: \"kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-kube-api-access-jv2wq\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331740 kubelet[1880]: I1213 03:43:31.331334 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68d2b6f8-edb0-402d-82c6-37631e25292f-lib-modules\") pod \"kube-proxy-5hvl4\" (UID: \"68d2b6f8-edb0-402d-82c6-37631e25292f\") " pod="kube-system/kube-proxy-5hvl4" Dec 13 03:43:31.331740 kubelet[1880]: I1213 03:43:31.331382 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-99xtg\" (UniqueName: \"kubernetes.io/projected/68d2b6f8-edb0-402d-82c6-37631e25292f-kube-api-access-99xtg\") pod \"kube-proxy-5hvl4\" (UID: \"68d2b6f8-edb0-402d-82c6-37631e25292f\") " pod="kube-system/kube-proxy-5hvl4" Dec 13 03:43:31.331740 kubelet[1880]: I1213 03:43:31.331431 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-cgroup\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331740 kubelet[1880]: I1213 03:43:31.331476 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-etc-cni-netd\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331740 kubelet[1880]: I1213 03:43:31.331520 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-hostproc\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.331740 kubelet[1880]: I1213 03:43:31.331643 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cni-path\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.332289 kubelet[1880]: I1213 03:43:31.331736 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89fb18dd-ec05-4608-98d9-9a6b038c1982-clustermesh-secrets\") pod 
\"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.332289 kubelet[1880]: I1213 03:43:31.331790 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/68d2b6f8-edb0-402d-82c6-37631e25292f-kube-proxy\") pod \"kube-proxy-5hvl4\" (UID: \"68d2b6f8-edb0-402d-82c6-37631e25292f\") " pod="kube-system/kube-proxy-5hvl4" Dec 13 03:43:31.332289 kubelet[1880]: I1213 03:43:31.331839 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68d2b6f8-edb0-402d-82c6-37631e25292f-xtables-lock\") pod \"kube-proxy-5hvl4\" (UID: \"68d2b6f8-edb0-402d-82c6-37631e25292f\") " pod="kube-system/kube-proxy-5hvl4" Dec 13 03:43:31.332289 kubelet[1880]: I1213 03:43:31.331908 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-run\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.332289 kubelet[1880]: I1213 03:43:31.332012 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-bpf-maps\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.332289 kubelet[1880]: I1213 03:43:31.332102 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-xtables-lock\") pod \"cilium-dcl2v\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") " pod="kube-system/cilium-dcl2v" Dec 13 03:43:31.564128 env[1551]: 
time="2024-12-13T03:43:31.564038479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hvl4,Uid:68d2b6f8-edb0-402d-82c6-37631e25292f,Namespace:kube-system,Attempt:0,}" Dec 13 03:43:31.593531 env[1551]: time="2024-12-13T03:43:31.593433534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcl2v,Uid:89fb18dd-ec05-4608-98d9-9a6b038c1982,Namespace:kube-system,Attempt:0,}" Dec 13 03:43:31.741816 sshd[1745]: Failed password for invalid user bvn from 51.89.216.178 port 54880 ssh2 Dec 13 03:43:32.230636 kubelet[1880]: E1213 03:43:32.230472 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:32.271162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471427803.mount: Deactivated successfully. Dec 13 03:43:32.272746 env[1551]: time="2024-12-13T03:43:32.272727757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.273661 env[1551]: time="2024-12-13T03:43:32.273648604Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.274433 env[1551]: time="2024-12-13T03:43:32.274401864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.275797 env[1551]: time="2024-12-13T03:43:32.275783997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.277068 env[1551]: time="2024-12-13T03:43:32.277044657Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.277401 env[1551]: time="2024-12-13T03:43:32.277389032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.278586 env[1551]: time="2024-12-13T03:43:32.278572174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.279298 env[1551]: time="2024-12-13T03:43:32.279287805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:32.304509 env[1551]: time="2024-12-13T03:43:32.304450130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:43:32.304509 env[1551]: time="2024-12-13T03:43:32.304475413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:43:32.304509 env[1551]: time="2024-12-13T03:43:32.304488213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:43:32.304632 env[1551]: time="2024-12-13T03:43:32.304567271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a27fdf8fd52c366b00640f1b4c4795a0a2667b6ab00aaf6e008c23f07248a7d pid=1947 runtime=io.containerd.runc.v2 Dec 13 03:43:32.305144 env[1551]: time="2024-12-13T03:43:32.305091059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:43:32.305144 env[1551]: time="2024-12-13T03:43:32.305111171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:43:32.305144 env[1551]: time="2024-12-13T03:43:32.305119174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:43:32.305243 env[1551]: time="2024-12-13T03:43:32.305185004Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee pid=1954 runtime=io.containerd.runc.v2 Dec 13 03:43:32.311810 systemd[1]: Started cri-containerd-6a27fdf8fd52c366b00640f1b4c4795a0a2667b6ab00aaf6e008c23f07248a7d.scope. Dec 13 03:43:32.312769 systemd[1]: Started cri-containerd-f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee.scope. 
Dec 13 03:43:32.325014 env[1551]: time="2024-12-13T03:43:32.324977035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcl2v,Uid:89fb18dd-ec05-4608-98d9-9a6b038c1982,Namespace:kube-system,Attempt:0,} returns sandbox id \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\"" Dec 13 03:43:32.326001 env[1551]: time="2024-12-13T03:43:32.325976876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hvl4,Uid:68d2b6f8-edb0-402d-82c6-37631e25292f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a27fdf8fd52c366b00640f1b4c4795a0a2667b6ab00aaf6e008c23f07248a7d\"" Dec 13 03:43:32.326276 env[1551]: time="2024-12-13T03:43:32.326259299Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 03:43:32.478649 sshd[1745]: Received disconnect from 51.89.216.178 port 54880:11: Bye Bye [preauth] Dec 13 03:43:32.478649 sshd[1745]: Disconnected from invalid user bvn 51.89.216.178 port 54880 [preauth] Dec 13 03:43:32.481244 systemd[1]: sshd@8-145.40.90.151:22-51.89.216.178:54880.service: Deactivated successfully. 
Dec 13 03:43:33.231568 kubelet[1880]: E1213 03:43:33.231502 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:34.232330 kubelet[1880]: E1213 03:43:34.232309 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:35.232966 kubelet[1880]: E1213 03:43:35.232917 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:36.233798 kubelet[1880]: E1213 03:43:36.233748 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:36.816154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781018564.mount: Deactivated successfully. Dec 13 03:43:37.234852 kubelet[1880]: E1213 03:43:37.234792 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:38.235847 kubelet[1880]: E1213 03:43:38.235799 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:38.463422 env[1551]: time="2024-12-13T03:43:38.463356431Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:38.464785 env[1551]: time="2024-12-13T03:43:38.464682222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:38.465741 env[1551]: time="2024-12-13T03:43:38.465729208Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:38.466041 env[1551]: time="2024-12-13T03:43:38.466026732Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 03:43:38.466827 env[1551]: time="2024-12-13T03:43:38.466812926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 03:43:38.467547 env[1551]: time="2024-12-13T03:43:38.467532520Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:43:38.472524 env[1551]: time="2024-12-13T03:43:38.472503237Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\"" Dec 13 03:43:38.472966 env[1551]: time="2024-12-13T03:43:38.472935383Z" level=info msg="StartContainer for \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\"" Dec 13 03:43:38.482735 systemd[1]: Started cri-containerd-ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4.scope. Dec 13 03:43:38.493588 env[1551]: time="2024-12-13T03:43:38.493530689Z" level=info msg="StartContainer for \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\" returns successfully" Dec 13 03:43:38.498452 systemd[1]: cri-containerd-ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4.scope: Deactivated successfully. 
Dec 13 03:43:39.236722 kubelet[1880]: E1213 03:43:39.236643 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:39.475497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4-rootfs.mount: Deactivated successfully. Dec 13 03:43:39.599401 env[1551]: time="2024-12-13T03:43:39.599183665Z" level=info msg="shim disconnected" id=ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4 Dec 13 03:43:39.599401 env[1551]: time="2024-12-13T03:43:39.599296331Z" level=warning msg="cleaning up after shim disconnected" id=ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4 namespace=k8s.io Dec 13 03:43:39.599401 env[1551]: time="2024-12-13T03:43:39.599325425Z" level=info msg="cleaning up dead shim" Dec 13 03:43:39.611457 env[1551]: time="2024-12-13T03:43:39.611407719Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2062 runtime=io.containerd.runc.v2\n" Dec 13 03:43:40.053813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674879052.mount: Deactivated successfully. 
Dec 13 03:43:40.237350 kubelet[1880]: E1213 03:43:40.237332 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:40.430568 env[1551]: time="2024-12-13T03:43:40.430488691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:40.431155 env[1551]: time="2024-12-13T03:43:40.431142541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:40.431754 env[1551]: time="2024-12-13T03:43:40.431721873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:40.432434 env[1551]: time="2024-12-13T03:43:40.432419042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:40.432783 env[1551]: time="2024-12-13T03:43:40.432742294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 03:43:40.433945 env[1551]: time="2024-12-13T03:43:40.433910878Z" level=info msg="CreateContainer within sandbox \"6a27fdf8fd52c366b00640f1b4c4795a0a2667b6ab00aaf6e008c23f07248a7d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 03:43:40.440333 env[1551]: time="2024-12-13T03:43:40.440312879Z" level=info msg="CreateContainer within sandbox \"6a27fdf8fd52c366b00640f1b4c4795a0a2667b6ab00aaf6e008c23f07248a7d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"799e6b615f9a5180340441ef6af8fcd7057307e9eceafbcf75df94141e6dcc1b\"" Dec 13 03:43:40.440621 env[1551]: time="2024-12-13T03:43:40.440608479Z" level=info msg="StartContainer for \"799e6b615f9a5180340441ef6af8fcd7057307e9eceafbcf75df94141e6dcc1b\"" Dec 13 03:43:40.449113 systemd[1]: Started cri-containerd-799e6b615f9a5180340441ef6af8fcd7057307e9eceafbcf75df94141e6dcc1b.scope. Dec 13 03:43:40.463155 env[1551]: time="2024-12-13T03:43:40.463129023Z" level=info msg="StartContainer for \"799e6b615f9a5180340441ef6af8fcd7057307e9eceafbcf75df94141e6dcc1b\" returns successfully" Dec 13 03:43:40.478273 env[1551]: time="2024-12-13T03:43:40.478249709Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 03:43:40.482650 env[1551]: time="2024-12-13T03:43:40.482630238Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\"" Dec 13 03:43:40.482867 env[1551]: time="2024-12-13T03:43:40.482822788Z" level=info msg="StartContainer for \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\"" Dec 13 03:43:40.491436 systemd[1]: Started cri-containerd-31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043.scope. 
Dec 13 03:43:40.496515 kubelet[1880]: I1213 03:43:40.496479 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5hvl4" podStartSLOduration=1.389805777 podStartE2EDuration="9.496466654s" podCreationTimestamp="2024-12-13 03:43:31 +0000 UTC" firstStartedPulling="2024-12-13 03:43:32.32655849 +0000 UTC m=+2.563143759" lastFinishedPulling="2024-12-13 03:43:40.433219383 +0000 UTC m=+10.669804636" observedRunningTime="2024-12-13 03:43:40.496375317 +0000 UTC m=+10.732960571" watchObservedRunningTime="2024-12-13 03:43:40.496466654 +0000 UTC m=+10.733051904" Dec 13 03:43:40.502711 env[1551]: time="2024-12-13T03:43:40.502658859Z" level=info msg="StartContainer for \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\" returns successfully" Dec 13 03:43:40.508876 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:43:40.508998 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:43:40.509100 systemd[1]: Stopping systemd-sysctl.service... Dec 13 03:43:40.509885 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:43:40.510072 systemd[1]: cri-containerd-31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043.scope: Deactivated successfully. Dec 13 03:43:40.513865 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 03:43:40.728106 env[1551]: time="2024-12-13T03:43:40.727959829Z" level=info msg="shim disconnected" id=31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043 Dec 13 03:43:40.728106 env[1551]: time="2024-12-13T03:43:40.728059463Z" level=warning msg="cleaning up after shim disconnected" id=31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043 namespace=k8s.io Dec 13 03:43:40.728106 env[1551]: time="2024-12-13T03:43:40.728087561Z" level=info msg="cleaning up dead shim" Dec 13 03:43:40.743330 env[1551]: time="2024-12-13T03:43:40.743216238Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:43:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2234 runtime=io.containerd.runc.v2\n" Dec 13 03:43:41.670374 systemd-resolved[1499]: Clock change detected. Flushing caches. Dec 13 03:43:41.670582 systemd-timesyncd[1500]: Contacted time server [2600:3c00::f03c:93ff:fe5b:29d1]:123 (2.flatcar.pool.ntp.org). Dec 13 03:43:41.670711 systemd-timesyncd[1500]: Initial clock synchronization to Fri 2024-12-13 03:43:41.670227 UTC. Dec 13 03:43:41.931085 kubelet[1880]: E1213 03:43:41.930837 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:42.167568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043-rootfs.mount: Deactivated successfully. 
Dec 13 03:43:42.173608 env[1551]: time="2024-12-13T03:43:42.173587648Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 03:43:42.178880 env[1551]: time="2024-12-13T03:43:42.178863544Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\"" Dec 13 03:43:42.179135 env[1551]: time="2024-12-13T03:43:42.179095733Z" level=info msg="StartContainer for \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\"" Dec 13 03:43:42.189897 systemd[1]: Started cri-containerd-43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7.scope. Dec 13 03:43:42.205796 env[1551]: time="2024-12-13T03:43:42.205762012Z" level=info msg="StartContainer for \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\" returns successfully" Dec 13 03:43:42.207801 systemd[1]: cri-containerd-43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7.scope: Deactivated successfully. 
Dec 13 03:43:42.225236 env[1551]: time="2024-12-13T03:43:42.225159936Z" level=info msg="shim disconnected" id=43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7 Dec 13 03:43:42.225236 env[1551]: time="2024-12-13T03:43:42.225206355Z" level=warning msg="cleaning up after shim disconnected" id=43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7 namespace=k8s.io Dec 13 03:43:42.225236 env[1551]: time="2024-12-13T03:43:42.225216872Z" level=info msg="cleaning up dead shim" Dec 13 03:43:42.231684 env[1551]: time="2024-12-13T03:43:42.231623656Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:43:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2353 runtime=io.containerd.runc.v2\n" Dec 13 03:43:42.931829 kubelet[1880]: E1213 03:43:42.931760 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:43.168061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7-rootfs.mount: Deactivated successfully. 
Dec 13 03:43:43.175566 env[1551]: time="2024-12-13T03:43:43.175546816Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 03:43:43.180144 env[1551]: time="2024-12-13T03:43:43.180101027Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\"" Dec 13 03:43:43.180409 env[1551]: time="2024-12-13T03:43:43.180361526Z" level=info msg="StartContainer for \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\"" Dec 13 03:43:43.188579 systemd[1]: Started cri-containerd-4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84.scope. Dec 13 03:43:43.200735 env[1551]: time="2024-12-13T03:43:43.200711340Z" level=info msg="StartContainer for \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\" returns successfully" Dec 13 03:43:43.201169 systemd[1]: cri-containerd-4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84.scope: Deactivated successfully. 
Dec 13 03:43:43.225164 env[1551]: time="2024-12-13T03:43:43.225020317Z" level=info msg="shim disconnected" id=4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84 Dec 13 03:43:43.225164 env[1551]: time="2024-12-13T03:43:43.225123620Z" level=warning msg="cleaning up after shim disconnected" id=4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84 namespace=k8s.io Dec 13 03:43:43.225164 env[1551]: time="2024-12-13T03:43:43.225150011Z" level=info msg="cleaning up dead shim" Dec 13 03:43:43.240908 env[1551]: time="2024-12-13T03:43:43.240816300Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2405 runtime=io.containerd.runc.v2\n" Dec 13 03:43:43.933066 kubelet[1880]: E1213 03:43:43.932916 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:44.168181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84-rootfs.mount: Deactivated successfully. Dec 13 03:43:44.179239 env[1551]: time="2024-12-13T03:43:44.179219394Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 03:43:44.184781 env[1551]: time="2024-12-13T03:43:44.184708352Z" level=info msg="CreateContainer within sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\"" Dec 13 03:43:44.185035 env[1551]: time="2024-12-13T03:43:44.184984286Z" level=info msg="StartContainer for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\"" Dec 13 03:43:44.194671 systemd[1]: Started cri-containerd-9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4.scope. 
Dec 13 03:43:44.209628 env[1551]: time="2024-12-13T03:43:44.209568702Z" level=info msg="StartContainer for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" returns successfully" Dec 13 03:43:44.263693 kubelet[1880]: I1213 03:43:44.263653 1880 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 03:43:44.265935 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 03:43:44.438995 kernel: Initializing XFRM netlink socket Dec 13 03:43:44.452997 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 03:43:44.933799 kubelet[1880]: E1213 03:43:44.933716 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:45.933991 kubelet[1880]: E1213 03:43:45.933869 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:46.063325 systemd-networkd[1310]: cilium_host: Link UP Dec 13 03:43:46.063449 systemd-networkd[1310]: cilium_net: Link UP Dec 13 03:43:46.070592 systemd-networkd[1310]: cilium_net: Gained carrier Dec 13 03:43:46.077729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 03:43:46.077800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 03:43:46.077822 systemd-networkd[1310]: cilium_host: Gained carrier Dec 13 03:43:46.123667 systemd-networkd[1310]: cilium_vxlan: Link UP Dec 13 03:43:46.123672 systemd-networkd[1310]: cilium_vxlan: Gained carrier Dec 13 03:43:46.256939 kernel: NET: Registered PF_ALG protocol family Dec 13 03:43:46.772058 systemd-networkd[1310]: cilium_net: Gained IPv6LL Dec 13 03:43:46.772318 systemd-networkd[1310]: cilium_host: Gained IPv6LL Dec 13 03:43:46.794955 systemd-networkd[1310]: lxc_health: Link UP Dec 13 03:43:46.820913 
systemd-networkd[1310]: lxc_health: Gained carrier Dec 13 03:43:46.821036 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 03:43:46.934768 kubelet[1880]: E1213 03:43:46.934713 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:47.260208 kubelet[1880]: I1213 03:43:47.260142 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dcl2v" podStartSLOduration=10.119542121 podStartE2EDuration="16.260131488s" podCreationTimestamp="2024-12-13 03:43:31 +0000 UTC" firstStartedPulling="2024-12-13 03:43:32.325967583 +0000 UTC m=+2.562552841" lastFinishedPulling="2024-12-13 03:43:38.466556954 +0000 UTC m=+8.703142208" observedRunningTime="2024-12-13 03:43:45.206383579 +0000 UTC m=+14.750619474" watchObservedRunningTime="2024-12-13 03:43:47.260131488 +0000 UTC m=+16.804367309" Dec 13 03:43:47.260338 kubelet[1880]: I1213 03:43:47.260318 1880 topology_manager.go:215] "Topology Admit Handler" podUID="f4460427-9f8d-49bc-b0ad-102dbeb42756" podNamespace="default" podName="nginx-deployment-85f456d6dd-l57b4" Dec 13 03:43:47.263516 systemd[1]: Created slice kubepods-besteffort-podf4460427_9f8d_49bc_b0ad_102dbeb42756.slice. 
Dec 13 03:43:47.324016 kubelet[1880]: I1213 03:43:47.323994 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjmkg\" (UniqueName: \"kubernetes.io/projected/f4460427-9f8d-49bc-b0ad-102dbeb42756-kube-api-access-hjmkg\") pod \"nginx-deployment-85f456d6dd-l57b4\" (UID: \"f4460427-9f8d-49bc-b0ad-102dbeb42756\") " pod="default/nginx-deployment-85f456d6dd-l57b4" Dec 13 03:43:47.566792 env[1551]: time="2024-12-13T03:43:47.566558358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l57b4,Uid:f4460427-9f8d-49bc-b0ad-102dbeb42756,Namespace:default,Attempt:0,}" Dec 13 03:43:47.621578 systemd-networkd[1310]: lxc6b5a21165e12: Link UP Dec 13 03:43:47.643977 kernel: eth0: renamed from tmpace5c Dec 13 03:43:47.667562 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 03:43:47.667607 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6b5a21165e12: link becomes ready Dec 13 03:43:47.667618 systemd-networkd[1310]: lxc6b5a21165e12: Gained carrier Dec 13 03:43:47.796068 systemd-networkd[1310]: cilium_vxlan: Gained IPv6LL Dec 13 03:43:47.934953 kubelet[1880]: E1213 03:43:47.934927 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:48.757045 systemd-networkd[1310]: lxc_health: Gained IPv6LL Dec 13 03:43:48.935837 kubelet[1880]: E1213 03:43:48.935792 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:49.652105 systemd-networkd[1310]: lxc6b5a21165e12: Gained IPv6LL Dec 13 03:43:49.863107 env[1551]: time="2024-12-13T03:43:49.863047334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:43:49.863107 env[1551]: time="2024-12-13T03:43:49.863068130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:43:49.863107 env[1551]: time="2024-12-13T03:43:49.863077211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:43:49.863336 env[1551]: time="2024-12-13T03:43:49.863140200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ace5cd284b9578284ac0488338569a8810f8f6912605e5b5f5e080e9cc2a7afc pid=3050 runtime=io.containerd.runc.v2 Dec 13 03:43:49.869432 systemd[1]: Started cri-containerd-ace5cd284b9578284ac0488338569a8810f8f6912605e5b5f5e080e9cc2a7afc.scope. Dec 13 03:43:49.891745 env[1551]: time="2024-12-13T03:43:49.891721646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-l57b4,Uid:f4460427-9f8d-49bc-b0ad-102dbeb42756,Namespace:default,Attempt:0,} returns sandbox id \"ace5cd284b9578284ac0488338569a8810f8f6912605e5b5f5e080e9cc2a7afc\"" Dec 13 03:43:49.892466 env[1551]: time="2024-12-13T03:43:49.892420487Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 03:43:49.936942 kubelet[1880]: E1213 03:43:49.936708 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:50.922230 kubelet[1880]: E1213 03:43:50.922112 1880 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:50.937672 kubelet[1880]: E1213 03:43:50.937582 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:51.937706 kubelet[1880]: E1213 03:43:51.937655 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:52.072884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887094129.mount: Deactivated successfully. 
Dec 13 03:43:52.909463 env[1551]: time="2024-12-13T03:43:52.909410430Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:52.909965 env[1551]: time="2024-12-13T03:43:52.909948752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:52.911243 env[1551]: time="2024-12-13T03:43:52.911209178Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:52.911948 env[1551]: time="2024-12-13T03:43:52.911907416Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:43:52.912378 env[1551]: time="2024-12-13T03:43:52.912329082Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 03:43:52.914088 env[1551]: time="2024-12-13T03:43:52.914048353Z" level=info msg="CreateContainer within sandbox \"ace5cd284b9578284ac0488338569a8810f8f6912605e5b5f5e080e9cc2a7afc\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 03:43:52.918676 env[1551]: time="2024-12-13T03:43:52.918608089Z" level=info msg="CreateContainer within sandbox \"ace5cd284b9578284ac0488338569a8810f8f6912605e5b5f5e080e9cc2a7afc\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6f12531f9436f938d3e71b8090a17f819a1dc18baa1c0a6b4772fb5d98f21a5c\"" Dec 13 03:43:52.918905 env[1551]: time="2024-12-13T03:43:52.918869125Z" level=info msg="StartContainer for 
\"6f12531f9436f938d3e71b8090a17f819a1dc18baa1c0a6b4772fb5d98f21a5c\"" Dec 13 03:43:52.928303 systemd[1]: Started cri-containerd-6f12531f9436f938d3e71b8090a17f819a1dc18baa1c0a6b4772fb5d98f21a5c.scope. Dec 13 03:43:52.937908 kubelet[1880]: E1213 03:43:52.937893 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:52.939048 env[1551]: time="2024-12-13T03:43:52.939029306Z" level=info msg="StartContainer for \"6f12531f9436f938d3e71b8090a17f819a1dc18baa1c0a6b4772fb5d98f21a5c\" returns successfully" Dec 13 03:43:53.220507 kubelet[1880]: I1213 03:43:53.220273 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-l57b4" podStartSLOduration=3.199328229 podStartE2EDuration="6.220238991s" podCreationTimestamp="2024-12-13 03:43:47 +0000 UTC" firstStartedPulling="2024-12-13 03:43:49.892280938 +0000 UTC m=+19.436516764" lastFinishedPulling="2024-12-13 03:43:52.913191705 +0000 UTC m=+22.457427526" observedRunningTime="2024-12-13 03:43:53.219807216 +0000 UTC m=+22.764043094" watchObservedRunningTime="2024-12-13 03:43:53.220238991 +0000 UTC m=+22.764474853" Dec 13 03:43:53.938841 kubelet[1880]: E1213 03:43:53.938737 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:54.939448 kubelet[1880]: E1213 03:43:54.939340 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:55.940463 kubelet[1880]: E1213 03:43:55.940337 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:43:56.362616 kubelet[1880]: I1213 03:43:56.362401 1880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 03:43:56.507072 update_engine[1547]: I1213 03:43:56.506949 1547 update_attempter.cc:509] Updating boot flags... 
Dec 13 03:43:56.941233 kubelet[1880]: E1213 03:43:56.941113 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:43:57.942079 kubelet[1880]: E1213 03:43:57.941959 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:43:58.943352 kubelet[1880]: E1213 03:43:58.943237 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:43:59.218386 kubelet[1880]: I1213 03:43:59.218174 1880 topology_manager.go:215] "Topology Admit Handler" podUID="3614af1f-29af-442c-95c4-099dd79077bf" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 03:43:59.232615 systemd[1]: Created slice kubepods-besteffort-pod3614af1f_29af_442c_95c4_099dd79077bf.slice.
Dec 13 03:43:59.314429 kubelet[1880]: I1213 03:43:59.314310 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3614af1f-29af-442c-95c4-099dd79077bf-data\") pod \"nfs-server-provisioner-0\" (UID: \"3614af1f-29af-442c-95c4-099dd79077bf\") " pod="default/nfs-server-provisioner-0"
Dec 13 03:43:59.314429 kubelet[1880]: I1213 03:43:59.314412 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cglfv\" (UniqueName: \"kubernetes.io/projected/3614af1f-29af-442c-95c4-099dd79077bf-kube-api-access-cglfv\") pod \"nfs-server-provisioner-0\" (UID: \"3614af1f-29af-442c-95c4-099dd79077bf\") " pod="default/nfs-server-provisioner-0"
Dec 13 03:43:59.538712 env[1551]: time="2024-12-13T03:43:59.538465787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3614af1f-29af-442c-95c4-099dd79077bf,Namespace:default,Attempt:0,}"
Dec 13 03:43:59.573753 systemd-networkd[1310]: lxcb50f175f1cfe: Link UP
Dec 13 03:43:59.593996 kernel: eth0: renamed from tmp37b66
Dec 13 03:43:59.614595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:43:59.614687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb50f175f1cfe: link becomes ready
Dec 13 03:43:59.615000 systemd-networkd[1310]: lxcb50f175f1cfe: Gained carrier
Dec 13 03:43:59.781404 env[1551]: time="2024-12-13T03:43:59.781180655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:43:59.781404 env[1551]: time="2024-12-13T03:43:59.781275596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:43:59.781404 env[1551]: time="2024-12-13T03:43:59.781312711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:43:59.781902 env[1551]: time="2024-12-13T03:43:59.781721544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b66871064359c68ea656164a89de41c925dd492dbf8b2a4bad0ac1ce5ceb04 pid=3231 runtime=io.containerd.runc.v2
Dec 13 03:43:59.794829 systemd[1]: Started cri-containerd-37b66871064359c68ea656164a89de41c925dd492dbf8b2a4bad0ac1ce5ceb04.scope.
Dec 13 03:43:59.815638 env[1551]: time="2024-12-13T03:43:59.815589019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3614af1f-29af-442c-95c4-099dd79077bf,Namespace:default,Attempt:0,} returns sandbox id \"37b66871064359c68ea656164a89de41c925dd492dbf8b2a4bad0ac1ce5ceb04\""
Dec 13 03:43:59.816345 env[1551]: time="2024-12-13T03:43:59.816304302Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 03:43:59.944067 kubelet[1880]: E1213 03:43:59.943891 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:00.081711 systemd[1]: Started sshd@9-145.40.90.151:22-218.92.0.155:58641.service.
Dec 13 03:44:00.944524 kubelet[1880]: E1213 03:44:00.944466 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:01.556069 systemd-networkd[1310]: lxcb50f175f1cfe: Gained IPv6LL
Dec 13 03:44:01.739507 sshd[3264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 03:44:01.896453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410257928.mount: Deactivated successfully.
Dec 13 03:44:01.945519 kubelet[1880]: E1213 03:44:01.945477 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:02.946213 kubelet[1880]: E1213 03:44:02.946163 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:03.102122 env[1551]: time="2024-12-13T03:44:03.102056796Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:03.102662 env[1551]: time="2024-12-13T03:44:03.102648200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:03.103660 env[1551]: time="2024-12-13T03:44:03.103608754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:03.104617 env[1551]: time="2024-12-13T03:44:03.104592819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:03.105167 env[1551]: time="2024-12-13T03:44:03.105108753Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 03:44:03.106823 env[1551]: time="2024-12-13T03:44:03.106809584Z" level=info msg="CreateContainer within sandbox \"37b66871064359c68ea656164a89de41c925dd492dbf8b2a4bad0ac1ce5ceb04\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 03:44:03.111367 env[1551]: time="2024-12-13T03:44:03.111336223Z" level=info msg="CreateContainer within sandbox \"37b66871064359c68ea656164a89de41c925dd492dbf8b2a4bad0ac1ce5ceb04\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"69978ee8abe373ad65b39853e4c0d5436f07968bd38a4e1a34c686a6dfa469e2\""
Dec 13 03:44:03.111710 env[1551]: time="2024-12-13T03:44:03.111698576Z" level=info msg="StartContainer for \"69978ee8abe373ad65b39853e4c0d5436f07968bd38a4e1a34c686a6dfa469e2\""
Dec 13 03:44:03.122302 systemd[1]: Started cri-containerd-69978ee8abe373ad65b39853e4c0d5436f07968bd38a4e1a34c686a6dfa469e2.scope.
Dec 13 03:44:03.133931 env[1551]: time="2024-12-13T03:44:03.133895689Z" level=info msg="StartContainer for \"69978ee8abe373ad65b39853e4c0d5436f07968bd38a4e1a34c686a6dfa469e2\" returns successfully"
Dec 13 03:44:03.251693 kubelet[1880]: I1213 03:44:03.251428 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=0.961521185 podStartE2EDuration="4.25139389s" podCreationTimestamp="2024-12-13 03:43:59 +0000 UTC" firstStartedPulling="2024-12-13 03:43:59.816166665 +0000 UTC m=+29.360402486" lastFinishedPulling="2024-12-13 03:44:03.106039371 +0000 UTC m=+32.650275191" observedRunningTime="2024-12-13 03:44:03.251146917 +0000 UTC m=+32.795382803" watchObservedRunningTime="2024-12-13 03:44:03.25139389 +0000 UTC m=+32.795629771"
Dec 13 03:44:03.947044 kubelet[1880]: E1213 03:44:03.946909 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:04.011369 sshd[3264]: Failed password for root from 218.92.0.155 port 58641 ssh2
Dec 13 03:44:04.948102 kubelet[1880]: E1213 03:44:04.947988 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:05.949062 kubelet[1880]: E1213 03:44:05.948927 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:06.949966 kubelet[1880]: E1213 03:44:06.949821 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:07.950149 kubelet[1880]: E1213 03:44:07.950025 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:08.056308 sshd[3264]: Failed password for root from 218.92.0.155 port 58641 ssh2
Dec 13 03:44:08.950969 kubelet[1880]: E1213 03:44:08.950844 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:09.952159 kubelet[1880]: E1213 03:44:09.952051 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:10.922336 kubelet[1880]: E1213 03:44:10.922212 1880 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:10.952339 kubelet[1880]: E1213 03:44:10.952220 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:11.572792 sshd[3264]: Failed password for root from 218.92.0.155 port 58641 ssh2
Dec 13 03:44:11.952581 kubelet[1880]: E1213 03:44:11.952469 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:12.227702 sshd[3264]: Received disconnect from 218.92.0.155 port 58641:11: [preauth]
Dec 13 03:44:12.227702 sshd[3264]: Disconnected from authenticating user root 218.92.0.155 port 58641 [preauth]
Dec 13 03:44:12.228194 sshd[3264]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 03:44:12.230366 systemd[1]: sshd@9-145.40.90.151:22-218.92.0.155:58641.service: Deactivated successfully.
Dec 13 03:44:12.952749 kubelet[1880]: E1213 03:44:12.952644 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:12.984855 kubelet[1880]: I1213 03:44:12.984781 1880 topology_manager.go:215] "Topology Admit Handler" podUID="b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a" podNamespace="default" podName="test-pod-1"
Dec 13 03:44:12.998744 systemd[1]: Created slice kubepods-besteffort-podb0105ca9_4cc2_4c31_a7bf_ae3e280aba0a.slice.
Dec 13 03:44:13.020741 kubelet[1880]: I1213 03:44:13.020653 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v995b\" (UniqueName: \"kubernetes.io/projected/b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a-kube-api-access-v995b\") pod \"test-pod-1\" (UID: \"b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a\") " pod="default/test-pod-1"
Dec 13 03:44:13.021099 kubelet[1880]: I1213 03:44:13.020765 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7941e8e8-9c24-4025-861c-294679c72a72\" (UniqueName: \"kubernetes.io/nfs/b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a-pvc-7941e8e8-9c24-4025-861c-294679c72a72\") pod \"test-pod-1\" (UID: \"b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a\") " pod="default/test-pod-1"
Dec 13 03:44:13.168020 kernel: FS-Cache: Loaded
Dec 13 03:44:13.206138 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 03:44:13.206186 kernel: RPC: Registered udp transport module.
Dec 13 03:44:13.206203 kernel: RPC: Registered tcp transport module.
Dec 13 03:44:13.211022 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 03:44:13.264020 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 03:44:13.392686 kernel: NFS: Registering the id_resolver key type
Dec 13 03:44:13.392824 kernel: Key type id_resolver registered
Dec 13 03:44:13.392838 kernel: Key type id_legacy registered
Dec 13 03:44:13.558895 nfsidmap[3361]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-4c4d6acc59'
Dec 13 03:44:13.626779 nfsidmap[3362]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-4c4d6acc59'
Dec 13 03:44:13.905610 env[1551]: time="2024-12-13T03:44:13.905481689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a,Namespace:default,Attempt:0,}"
Dec 13 03:44:13.928781 systemd-networkd[1310]: lxc87471a95b2b4: Link UP
Dec 13 03:44:13.942938 kernel: eth0: renamed from tmp2596d
Dec 13 03:44:13.953144 kubelet[1880]: E1213 03:44:13.953126 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:13.968002 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:44:13.968111 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc87471a95b2b4: link becomes ready
Dec 13 03:44:13.975661 systemd-networkd[1310]: lxc87471a95b2b4: Gained carrier
Dec 13 03:44:14.218498 env[1551]: time="2024-12-13T03:44:14.218242180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:44:14.218498 env[1551]: time="2024-12-13T03:44:14.218315556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:44:14.218498 env[1551]: time="2024-12-13T03:44:14.218341424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:44:14.218975 env[1551]: time="2024-12-13T03:44:14.218714969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2596df7c3231644334a91a76899e0c579ac8e6ff6dfbf55ff80e9962359bb93d pid=3420 runtime=io.containerd.runc.v2
Dec 13 03:44:14.231106 systemd[1]: Started cri-containerd-2596df7c3231644334a91a76899e0c579ac8e6ff6dfbf55ff80e9962359bb93d.scope.
Dec 13 03:44:14.251896 env[1551]: time="2024-12-13T03:44:14.251872355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b0105ca9-4cc2-4c31-a7bf-ae3e280aba0a,Namespace:default,Attempt:0,} returns sandbox id \"2596df7c3231644334a91a76899e0c579ac8e6ff6dfbf55ff80e9962359bb93d\""
Dec 13 03:44:14.252592 env[1551]: time="2024-12-13T03:44:14.252578974Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 03:44:14.607290 env[1551]: time="2024-12-13T03:44:14.607036607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:14.610034 env[1551]: time="2024-12-13T03:44:14.609959099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:14.615140 env[1551]: time="2024-12-13T03:44:14.615025100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:14.620153 env[1551]: time="2024-12-13T03:44:14.620037169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 03:44:14.622559 env[1551]: time="2024-12-13T03:44:14.622433030Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 03:44:14.628758 env[1551]: time="2024-12-13T03:44:14.628636010Z" level=info msg="CreateContainer within sandbox \"2596df7c3231644334a91a76899e0c579ac8e6ff6dfbf55ff80e9962359bb93d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 03:44:14.644574 env[1551]: time="2024-12-13T03:44:14.644526263Z" level=info msg="CreateContainer within sandbox \"2596df7c3231644334a91a76899e0c579ac8e6ff6dfbf55ff80e9962359bb93d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c829be8693907c17f41d00f620cabb76f54b602adfa77bd2203cc1c82a917524\""
Dec 13 03:44:14.644880 env[1551]: time="2024-12-13T03:44:14.644847302Z" level=info msg="StartContainer for \"c829be8693907c17f41d00f620cabb76f54b602adfa77bd2203cc1c82a917524\""
Dec 13 03:44:14.653362 systemd[1]: Started cri-containerd-c829be8693907c17f41d00f620cabb76f54b602adfa77bd2203cc1c82a917524.scope.
Dec 13 03:44:14.665488 env[1551]: time="2024-12-13T03:44:14.665435133Z" level=info msg="StartContainer for \"c829be8693907c17f41d00f620cabb76f54b602adfa77bd2203cc1c82a917524\" returns successfully"
Dec 13 03:44:14.954384 kubelet[1880]: E1213 03:44:14.954261 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:15.247899 systemd[1]: Started sshd@10-145.40.90.151:22-186.96.145.241:43624.service.
Dec 13 03:44:15.284877 kubelet[1880]: I1213 03:44:15.284723 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.911495067 podStartE2EDuration="16.2846854s" podCreationTimestamp="2024-12-13 03:43:59 +0000 UTC" firstStartedPulling="2024-12-13 03:44:14.252460227 +0000 UTC m=+43.796696048" lastFinishedPulling="2024-12-13 03:44:14.625650491 +0000 UTC m=+44.169886381" observedRunningTime="2024-12-13 03:44:15.284570024 +0000 UTC m=+44.828805918" watchObservedRunningTime="2024-12-13 03:44:15.2846854 +0000 UTC m=+44.828921271"
Dec 13 03:44:15.444093 systemd-networkd[1310]: lxc87471a95b2b4: Gained IPv6LL
Dec 13 03:44:15.567902 sshd[3522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=186.96.145.241 user=root
Dec 13 03:44:15.955373 kubelet[1880]: E1213 03:44:15.955238 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:16.956073 kubelet[1880]: E1213 03:44:16.955954 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:17.092391 sshd[3522]: Failed password for root from 186.96.145.241 port 43624 ssh2
Dec 13 03:44:17.632509 sshd[3522]: Connection closed by authenticating user root 186.96.145.241 port 43624 [preauth]
Dec 13 03:44:17.635054 systemd[1]: sshd@10-145.40.90.151:22-186.96.145.241:43624.service: Deactivated successfully.
Dec 13 03:44:17.957337 kubelet[1880]: E1213 03:44:17.957125 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:18.958376 kubelet[1880]: E1213 03:44:18.958268 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:19.958861 kubelet[1880]: E1213 03:44:19.958748 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:20.959750 kubelet[1880]: E1213 03:44:20.959640 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:21.611294 env[1551]: time="2024-12-13T03:44:21.611259864Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:44:21.614261 env[1551]: time="2024-12-13T03:44:21.614225909Z" level=info msg="StopContainer for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" with timeout 2 (s)"
Dec 13 03:44:21.614481 env[1551]: time="2024-12-13T03:44:21.614455450Z" level=info msg="Stop container \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" with signal terminated"
Dec 13 03:44:21.617381 systemd-networkd[1310]: lxc_health: Link DOWN
Dec 13 03:44:21.617400 systemd-networkd[1310]: lxc_health: Lost carrier
Dec 13 03:44:21.674563 systemd[1]: cri-containerd-9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4.scope: Deactivated successfully.
Dec 13 03:44:21.674879 systemd[1]: cri-containerd-9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4.scope: Consumed 4.459s CPU time.
Dec 13 03:44:21.705652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4-rootfs.mount: Deactivated successfully.
Dec 13 03:44:21.960645 kubelet[1880]: E1213 03:44:21.960533 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:22.819882 env[1551]: time="2024-12-13T03:44:22.819729299Z" level=info msg="shim disconnected" id=9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4
Dec 13 03:44:22.819882 env[1551]: time="2024-12-13T03:44:22.819844445Z" level=warning msg="cleaning up after shim disconnected" id=9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4 namespace=k8s.io
Dec 13 03:44:22.819882 env[1551]: time="2024-12-13T03:44:22.819876935Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:22.828367 env[1551]: time="2024-12-13T03:44:22.828319154Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3565 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:22.829414 env[1551]: time="2024-12-13T03:44:22.829372118Z" level=info msg="StopContainer for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" returns successfully"
Dec 13 03:44:22.829751 env[1551]: time="2024-12-13T03:44:22.829707777Z" level=info msg="StopPodSandbox for \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\""
Dec 13 03:44:22.829751 env[1551]: time="2024-12-13T03:44:22.829741715Z" level=info msg="Container to stop \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:44:22.829751 env[1551]: time="2024-12-13T03:44:22.829750631Z" level=info msg="Container to stop \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:44:22.829846 env[1551]: time="2024-12-13T03:44:22.829756504Z" level=info msg="Container to stop \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:44:22.829846 env[1551]: time="2024-12-13T03:44:22.829762696Z" level=info msg="Container to stop \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:44:22.829846 env[1551]: time="2024-12-13T03:44:22.829768293Z" level=info msg="Container to stop \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 03:44:22.830961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee-shm.mount: Deactivated successfully.
Dec 13 03:44:22.832685 systemd[1]: cri-containerd-f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee.scope: Deactivated successfully.
Dec 13 03:44:22.847683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee-rootfs.mount: Deactivated successfully.
Dec 13 03:44:22.861301 env[1551]: time="2024-12-13T03:44:22.861243612Z" level=info msg="shim disconnected" id=f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee
Dec 13 03:44:22.861301 env[1551]: time="2024-12-13T03:44:22.861289005Z" level=warning msg="cleaning up after shim disconnected" id=f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee namespace=k8s.io
Dec 13 03:44:22.861301 env[1551]: time="2024-12-13T03:44:22.861301423Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:22.867185 env[1551]: time="2024-12-13T03:44:22.867124231Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3595 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:22.867425 env[1551]: time="2024-12-13T03:44:22.867375490Z" level=info msg="TearDown network for sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" successfully"
Dec 13 03:44:22.867425 env[1551]: time="2024-12-13T03:44:22.867396944Z" level=info msg="StopPodSandbox for \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" returns successfully"
Dec 13 03:44:22.961693 kubelet[1880]: E1213 03:44:22.961619 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:22.997238 kubelet[1880]: I1213 03:44:22.997160 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cni-path\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.997238 kubelet[1880]: I1213 03:44:22.997246 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-run\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.997664 kubelet[1880]: I1213 03:44:22.997318 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-config-path\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.997664 kubelet[1880]: I1213 03:44:22.997368 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-net\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.997664 kubelet[1880]: I1213 03:44:22.997421 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv2wq\" (UniqueName: \"kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-kube-api-access-jv2wq\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.997664 kubelet[1880]: I1213 03:44:22.997409 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:22.997664 kubelet[1880]: I1213 03:44:22.997409 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cni-path" (OuterVolumeSpecName: "cni-path") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:22.998271 kubelet[1880]: I1213 03:44:22.997472 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-kernel\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.998271 kubelet[1880]: I1213 03:44:22.997546 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:22.998271 kubelet[1880]: I1213 03:44:22.997546 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:22.998271 kubelet[1880]: I1213 03:44:22.997627 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-etc-cni-netd\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.998271 kubelet[1880]: I1213 03:44:22.997697 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89fb18dd-ec05-4608-98d9-9a6b038c1982-clustermesh-secrets\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.999129 kubelet[1880]: I1213 03:44:22.997766 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-bpf-maps\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.999129 kubelet[1880]: I1213 03:44:22.997746 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:22.999129 kubelet[1880]: I1213 03:44:22.998499 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:22.999129 kubelet[1880]: I1213 03:44:22.997822 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-xtables-lock\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:22.999129 kubelet[1880]: I1213 03:44:22.998701 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:23.000037 kubelet[1880]: I1213 03:44:22.999018 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-lib-modules\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:23.000037 kubelet[1880]: I1213 03:44:22.999196 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-hubble-tls\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:23.000037 kubelet[1880]: I1213 03:44:22.999799 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:23.000574 kubelet[1880]: I1213 03:44:23.000110 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-cgroup\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:23.000574 kubelet[1880]: I1213 03:44:23.000220 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-hostproc\") pod \"89fb18dd-ec05-4608-98d9-9a6b038c1982\" (UID: \"89fb18dd-ec05-4608-98d9-9a6b038c1982\") "
Dec 13 03:44:23.000574 kubelet[1880]: I1213 03:44:23.000396 1880 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-kernel\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.000574 kubelet[1880]: I1213 03:44:23.000457 1880 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-etc-cni-netd\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.000574 kubelet[1880]: I1213 03:44:23.000507 1880 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-bpf-maps\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.000574 kubelet[1880]: I1213 03:44:23.000575 1880 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-xtables-lock\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.001595 kubelet[1880]: I1213 03:44:23.000622 1880 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-lib-modules\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.001595 kubelet[1880]: I1213 03:44:23.000668 1880 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cni-path\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.001595 kubelet[1880]: I1213 03:44:23.000714 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-run\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.001595 kubelet[1880]: I1213 03:44:23.000778 1880 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-host-proc-sys-net\") on node \"10.67.80.25\" DevicePath \"\""
Dec 13 03:44:23.001595 kubelet[1880]: I1213 03:44:23.000894 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-hostproc" (OuterVolumeSpecName: "hostproc") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:23.001595 kubelet[1880]: I1213 03:44:23.001024 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 03:44:23.003485 kubelet[1880]: I1213 03:44:23.003468 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-kube-api-access-jv2wq" (OuterVolumeSpecName: "kube-api-access-jv2wq") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "kube-api-access-jv2wq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:44:23.003795 kubelet[1880]: I1213 03:44:23.003748 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 03:44:23.003831 kubelet[1880]: I1213 03:44:23.003814 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 03:44:23.003853 kubelet[1880]: I1213 03:44:23.003827 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fb18dd-ec05-4608-98d9-9a6b038c1982-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "89fb18dd-ec05-4608-98d9-9a6b038c1982" (UID: "89fb18dd-ec05-4608-98d9-9a6b038c1982"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:23.004455 systemd[1]: var-lib-kubelet-pods-89fb18dd\x2dec05\x2d4608\x2d98d9\x2d9a6b038c1982-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djv2wq.mount: Deactivated successfully. Dec 13 03:44:23.005764 systemd[1]: var-lib-kubelet-pods-89fb18dd\x2dec05\x2d4608\x2d98d9\x2d9a6b038c1982-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 03:44:23.005822 systemd[1]: var-lib-kubelet-pods-89fb18dd\x2dec05\x2d4608\x2d98d9\x2d9a6b038c1982-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 03:44:23.101846 kubelet[1880]: I1213 03:44:23.101662 1880 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-hostproc\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:23.101846 kubelet[1880]: I1213 03:44:23.101712 1880 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-hubble-tls\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:23.101846 kubelet[1880]: I1213 03:44:23.101728 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-cgroup\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:23.101846 kubelet[1880]: I1213 03:44:23.101746 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89fb18dd-ec05-4608-98d9-9a6b038c1982-cilium-config-path\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:23.101846 kubelet[1880]: I1213 03:44:23.101767 1880 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jv2wq\" (UniqueName: \"kubernetes.io/projected/89fb18dd-ec05-4608-98d9-9a6b038c1982-kube-api-access-jv2wq\") on node \"10.67.80.25\" DevicePath 
\"\"" Dec 13 03:44:23.101846 kubelet[1880]: I1213 03:44:23.101783 1880 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89fb18dd-ec05-4608-98d9-9a6b038c1982-clustermesh-secrets\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:23.160185 systemd[1]: Removed slice kubepods-burstable-pod89fb18dd_ec05_4608_98d9_9a6b038c1982.slice. Dec 13 03:44:23.160252 systemd[1]: kubepods-burstable-pod89fb18dd_ec05_4608_98d9_9a6b038c1982.slice: Consumed 4.510s CPU time. Dec 13 03:44:23.298193 kubelet[1880]: I1213 03:44:23.298075 1880 scope.go:117] "RemoveContainer" containerID="9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4" Dec 13 03:44:23.301176 env[1551]: time="2024-12-13T03:44:23.301051687Z" level=info msg="RemoveContainer for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\"" Dec 13 03:44:23.306322 env[1551]: time="2024-12-13T03:44:23.306200913Z" level=info msg="RemoveContainer for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" returns successfully" Dec 13 03:44:23.306688 kubelet[1880]: I1213 03:44:23.306636 1880 scope.go:117] "RemoveContainer" containerID="4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84" Dec 13 03:44:23.309127 env[1551]: time="2024-12-13T03:44:23.309025037Z" level=info msg="RemoveContainer for \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\"" Dec 13 03:44:23.312683 env[1551]: time="2024-12-13T03:44:23.312577825Z" level=info msg="RemoveContainer for \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\" returns successfully" Dec 13 03:44:23.312953 kubelet[1880]: I1213 03:44:23.312901 1880 scope.go:117] "RemoveContainer" containerID="43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7" Dec 13 03:44:23.315434 env[1551]: time="2024-12-13T03:44:23.315357231Z" level=info msg="RemoveContainer for \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\"" Dec 13 
03:44:23.319667 env[1551]: time="2024-12-13T03:44:23.319587474Z" level=info msg="RemoveContainer for \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\" returns successfully" Dec 13 03:44:23.320060 kubelet[1880]: I1213 03:44:23.320009 1880 scope.go:117] "RemoveContainer" containerID="31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043" Dec 13 03:44:23.322686 env[1551]: time="2024-12-13T03:44:23.322609563Z" level=info msg="RemoveContainer for \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\"" Dec 13 03:44:23.326526 env[1551]: time="2024-12-13T03:44:23.326422102Z" level=info msg="RemoveContainer for \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\" returns successfully" Dec 13 03:44:23.326824 kubelet[1880]: I1213 03:44:23.326779 1880 scope.go:117] "RemoveContainer" containerID="ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4" Dec 13 03:44:23.329446 env[1551]: time="2024-12-13T03:44:23.329369695Z" level=info msg="RemoveContainer for \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\"" Dec 13 03:44:23.333075 env[1551]: time="2024-12-13T03:44:23.332980307Z" level=info msg="RemoveContainer for \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\" returns successfully" Dec 13 03:44:23.333416 kubelet[1880]: I1213 03:44:23.333370 1880 scope.go:117] "RemoveContainer" containerID="9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4" Dec 13 03:44:23.334035 env[1551]: time="2024-12-13T03:44:23.333819406Z" level=error msg="ContainerStatus for \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\": not found" Dec 13 03:44:23.334398 kubelet[1880]: E1213 03:44:23.334340 1880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
an error occurred when try to find container \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\": not found" containerID="9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4" Dec 13 03:44:23.334577 kubelet[1880]: I1213 03:44:23.334414 1880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4"} err="failed to get container status \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cd37ebac7ce9b62c4cecbb500a541d2a7ceb985a570d70ffb19d3932c5f6df4\": not found" Dec 13 03:44:23.334699 kubelet[1880]: I1213 03:44:23.334582 1880 scope.go:117] "RemoveContainer" containerID="4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84" Dec 13 03:44:23.335166 env[1551]: time="2024-12-13T03:44:23.335031360Z" level=error msg="ContainerStatus for \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\": not found" Dec 13 03:44:23.335485 kubelet[1880]: E1213 03:44:23.335416 1880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\": not found" containerID="4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84" Dec 13 03:44:23.335634 kubelet[1880]: I1213 03:44:23.335481 1880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84"} err="failed to get container status \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"4341c05185af3ae794e84fb523a0cea603453b7da4f2299b320cf1b91acb6f84\": not found" Dec 13 03:44:23.335634 kubelet[1880]: I1213 03:44:23.335528 1880 scope.go:117] "RemoveContainer" containerID="43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7" Dec 13 03:44:23.336163 env[1551]: time="2024-12-13T03:44:23.336026002Z" level=error msg="ContainerStatus for \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\": not found" Dec 13 03:44:23.336517 kubelet[1880]: E1213 03:44:23.336452 1880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\": not found" containerID="43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7" Dec 13 03:44:23.336675 kubelet[1880]: I1213 03:44:23.336510 1880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7"} err="failed to get container status \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"43c0acc1fb4daf40daeb6af21788cca0cf2bebf310426788138898be5f389fc7\": not found" Dec 13 03:44:23.336675 kubelet[1880]: I1213 03:44:23.336554 1880 scope.go:117] "RemoveContainer" containerID="31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043" Dec 13 03:44:23.337182 env[1551]: time="2024-12-13T03:44:23.337042211Z" level=error msg="ContainerStatus for \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\": not found" Dec 13 03:44:23.337514 kubelet[1880]: E1213 03:44:23.337447 1880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\": not found" containerID="31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043" Dec 13 03:44:23.337670 kubelet[1880]: I1213 03:44:23.337507 1880 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043"} err="failed to get container status \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\": rpc error: code = NotFound desc = an error occurred when try to find container \"31f9c80b1a1264f8a6cb49991bd283aefe633ea48b9112d7f1257c5299a4f043\": not found" Dec 13 03:44:23.337670 kubelet[1880]: I1213 03:44:23.337554 1880 scope.go:117] "RemoveContainer" containerID="ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4" Dec 13 03:44:23.338224 env[1551]: time="2024-12-13T03:44:23.338085728Z" level=error msg="ContainerStatus for \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\": not found" Dec 13 03:44:23.338594 kubelet[1880]: E1213 03:44:23.338543 1880 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\": not found" containerID="ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4" Dec 13 03:44:23.338750 kubelet[1880]: I1213 03:44:23.338611 1880 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4"} err="failed to get container status \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef6d62d8df20e634cc49b877edcd0f479f234897c3b4e0d7df2f60db739670a4\": not found" Dec 13 03:44:23.810182 systemd[1]: Started sshd@11-145.40.90.151:22-92.255.85.189:43712.service. Dec 13 03:44:23.903253 kubelet[1880]: I1213 03:44:23.903140 1880 topology_manager.go:215] "Topology Admit Handler" podUID="bbd7f776-fd10-4cd3-aa7d-b8900ca88adb" podNamespace="kube-system" podName="cilium-operator-599987898-cpc7k" Dec 13 03:44:23.903253 kubelet[1880]: E1213 03:44:23.903249 1880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" containerName="mount-cgroup" Dec 13 03:44:23.903690 kubelet[1880]: E1213 03:44:23.903278 1880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" containerName="clean-cilium-state" Dec 13 03:44:23.903690 kubelet[1880]: E1213 03:44:23.903297 1880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" containerName="cilium-agent" Dec 13 03:44:23.903690 kubelet[1880]: E1213 03:44:23.903317 1880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" containerName="apply-sysctl-overwrites" Dec 13 03:44:23.903690 kubelet[1880]: E1213 03:44:23.903334 1880 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" containerName="mount-bpf-fs" Dec 13 03:44:23.903690 kubelet[1880]: I1213 03:44:23.903379 1880 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" containerName="cilium-agent" Dec 13 03:44:23.906828 kubelet[1880]: I1213 03:44:23.906730 1880 topology_manager.go:215] 
"Topology Admit Handler" podUID="3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" podNamespace="kube-system" podName="cilium-xq9qs" Dec 13 03:44:23.918678 systemd[1]: Created slice kubepods-besteffort-podbbd7f776_fd10_4cd3_aa7d_b8900ca88adb.slice. Dec 13 03:44:23.931009 systemd[1]: Created slice kubepods-burstable-pod3b7d599c_f789_4ce1_98cb_371d9f2ad3ea.slice. Dec 13 03:44:23.962897 kubelet[1880]: E1213 03:44:23.962775 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:44:24.007555 kubelet[1880]: I1213 03:44:24.007444 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-ipsec-secrets\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.007555 kubelet[1880]: I1213 03:44:24.007539 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cni-path\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008013 kubelet[1880]: I1213 03:44:24.007610 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-xtables-lock\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008013 kubelet[1880]: I1213 03:44:24.007733 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zf6\" (UniqueName: \"kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-kube-api-access-z8zf6\") pod \"cilium-xq9qs\" (UID: 
\"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008013 kubelet[1880]: I1213 03:44:24.007856 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbd7f776-fd10-4cd3-aa7d-b8900ca88adb-cilium-config-path\") pod \"cilium-operator-599987898-cpc7k\" (UID: \"bbd7f776-fd10-4cd3-aa7d-b8900ca88adb\") " pod="kube-system/cilium-operator-599987898-cpc7k" Dec 13 03:44:24.008013 kubelet[1880]: I1213 03:44:24.007969 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxzh\" (UniqueName: \"kubernetes.io/projected/bbd7f776-fd10-4cd3-aa7d-b8900ca88adb-kube-api-access-7xxzh\") pod \"cilium-operator-599987898-cpc7k\" (UID: \"bbd7f776-fd10-4cd3-aa7d-b8900ca88adb\") " pod="kube-system/cilium-operator-599987898-cpc7k" Dec 13 03:44:24.008429 kubelet[1880]: I1213 03:44:24.008040 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hostproc\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008429 kubelet[1880]: I1213 03:44:24.008090 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-cgroup\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008429 kubelet[1880]: I1213 03:44:24.008136 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-lib-modules\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " 
pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008429 kubelet[1880]: I1213 03:44:24.008186 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-config-path\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008429 kubelet[1880]: I1213 03:44:24.008240 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-net\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.008429 kubelet[1880]: I1213 03:44:24.008289 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-kernel\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.009038 kubelet[1880]: I1213 03:44:24.008337 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-etc-cni-netd\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.009038 kubelet[1880]: I1213 03:44:24.008382 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-clustermesh-secrets\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.009038 kubelet[1880]: I1213 03:44:24.008430 1880 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hubble-tls\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.009038 kubelet[1880]: I1213 03:44:24.008484 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-run\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.009038 kubelet[1880]: I1213 03:44:24.008569 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-bpf-maps\") pod \"cilium-xq9qs\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " pod="kube-system/cilium-xq9qs" Dec 13 03:44:24.096522 kubelet[1880]: E1213 03:44:24.096284 1880 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-z8zf6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-xq9qs" podUID="3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" Dec 13 03:44:24.225720 env[1551]: time="2024-12-13T03:44:24.225587425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-cpc7k,Uid:bbd7f776-fd10-4cd3-aa7d-b8900ca88adb,Namespace:kube-system,Attempt:0,}" Dec 13 03:44:24.239918 env[1551]: time="2024-12-13T03:44:24.239892547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:44:24.239975 env[1551]: time="2024-12-13T03:44:24.239913790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:44:24.239975 env[1551]: time="2024-12-13T03:44:24.239921459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:44:24.240019 env[1551]: time="2024-12-13T03:44:24.239985196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/804aaf83fa21d29f35b0f7ea7b6f332ccaa88fc895fd68eaef45b8df39920d05 pid=3625 runtime=io.containerd.runc.v2 Dec 13 03:44:24.246616 systemd[1]: Started cri-containerd-804aaf83fa21d29f35b0f7ea7b6f332ccaa88fc895fd68eaef45b8df39920d05.scope. Dec 13 03:44:24.271814 env[1551]: time="2024-12-13T03:44:24.271787068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-cpc7k,Uid:bbd7f776-fd10-4cd3-aa7d-b8900ca88adb,Namespace:kube-system,Attempt:0,} returns sandbox id \"804aaf83fa21d29f35b0f7ea7b6f332ccaa88fc895fd68eaef45b8df39920d05\"" Dec 13 03:44:24.272847 env[1551]: time="2024-12-13T03:44:24.272821557Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 03:44:24.412843 kubelet[1880]: I1213 03:44:24.412710 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-ipsec-secrets\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.412843 kubelet[1880]: I1213 03:44:24.412808 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hostproc\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.412843 kubelet[1880]: I1213 03:44:24.412862 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-net\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.413498 kubelet[1880]: I1213 03:44:24.412915 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-clustermesh-secrets\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.413498 kubelet[1880]: I1213 03:44:24.412993 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-cgroup\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.413498 kubelet[1880]: I1213 03:44:24.413057 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-etc-cni-netd\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.413498 kubelet[1880]: I1213 03:44:24.413077 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.413498 kubelet[1880]: I1213 03:44:24.413078 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.414065 kubelet[1880]: I1213 03:44:24.413112 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-kernel\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.414065 kubelet[1880]: I1213 03:44:24.413186 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.414065 kubelet[1880]: I1213 03:44:24.413220 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.414065 kubelet[1880]: I1213 03:44:24.413249 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.414065 kubelet[1880]: I1213 03:44:24.413275 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-run\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.414613 kubelet[1880]: I1213 03:44:24.413338 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.414613 kubelet[1880]: I1213 03:44:24.413403 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cni-path\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.414613 kubelet[1880]: I1213 03:44:24.413465 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-xtables-lock\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.414613 kubelet[1880]: I1213 03:44:24.413486 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.414613 kubelet[1880]: I1213 03:44:24.413529 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8zf6\" (UniqueName: \"kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-kube-api-access-z8zf6\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.415163 kubelet[1880]: I1213 03:44:24.413550 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.415163 kubelet[1880]: I1213 03:44:24.413582 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-lib-modules\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.415163 kubelet[1880]: I1213 03:44:24.413628 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.415163 kubelet[1880]: I1213 03:44:24.413743 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-config-path\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.415163 kubelet[1880]: I1213 03:44:24.413816 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hubble-tls\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.415163 kubelet[1880]: I1213 03:44:24.413874 1880 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-bpf-maps\") pod \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\" (UID: \"3b7d599c-f789-4ce1-98cb-371d9f2ad3ea\") " Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414001 1880 reconciler_common.go:289] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hostproc\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414038 1880 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-net\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414067 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-cgroup\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414091 1880 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-etc-cni-netd\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414090 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414114 1880 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-host-proc-sys-kernel\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414218 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-run\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.415741 kubelet[1880]: I1213 03:44:24.414277 1880 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cni-path\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.416521 kubelet[1880]: I1213 03:44:24.414324 1880 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-xtables-lock\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.416521 kubelet[1880]: I1213 03:44:24.414371 1880 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-lib-modules\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.418483 kubelet[1880]: I1213 03:44:24.418386 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 03:44:24.419648 kubelet[1880]: I1213 03:44:24.419591 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:24.419648 kubelet[1880]: I1213 03:44:24.419630 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 03:44:24.419648 kubelet[1880]: I1213 03:44:24.419632 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-kube-api-access-z8zf6" (OuterVolumeSpecName: "kube-api-access-z8zf6") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "kube-api-access-z8zf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:24.419800 kubelet[1880]: I1213 03:44:24.419765 1880 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" (UID: "3b7d599c-f789-4ce1-98cb-371d9f2ad3ea"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 03:44:24.515231 kubelet[1880]: I1213 03:44:24.515112 1880 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z8zf6\" (UniqueName: \"kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-kube-api-access-z8zf6\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.515231 kubelet[1880]: I1213 03:44:24.515185 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-config-path\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.515231 kubelet[1880]: I1213 03:44:24.515215 1880 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-hubble-tls\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.515231 kubelet[1880]: I1213 03:44:24.515241 1880 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-bpf-maps\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.515846 kubelet[1880]: I1213 03:44:24.515268 1880 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-cilium-ipsec-secrets\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.515846 kubelet[1880]: I1213 03:44:24.515293 1880 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea-clustermesh-secrets\") on node \"10.67.80.25\" DevicePath \"\"" Dec 13 03:44:24.963701 kubelet[1880]: E1213 03:44:24.963591 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:44:25.120686 systemd[1]: 
var-lib-kubelet-pods-3b7d599c\x2df789\x2d4ce1\x2d98cb\x2d371d9f2ad3ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz8zf6.mount: Deactivated successfully. Dec 13 03:44:25.120754 systemd[1]: var-lib-kubelet-pods-3b7d599c\x2df789\x2d4ce1\x2d98cb\x2d371d9f2ad3ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 03:44:25.120789 systemd[1]: var-lib-kubelet-pods-3b7d599c\x2df789\x2d4ce1\x2d98cb\x2d371d9f2ad3ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 03:44:25.120821 systemd[1]: var-lib-kubelet-pods-3b7d599c\x2df789\x2d4ce1\x2d98cb\x2d371d9f2ad3ea-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 03:44:25.150737 kubelet[1880]: I1213 03:44:25.150684 1880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fb18dd-ec05-4608-98d9-9a6b038c1982" path="/var/lib/kubelet/pods/89fb18dd-ec05-4608-98d9-9a6b038c1982/volumes" Dec 13 03:44:25.153681 systemd[1]: Removed slice kubepods-burstable-pod3b7d599c_f789_4ce1_98cb_371d9f2ad3ea.slice. Dec 13 03:44:25.338982 kubelet[1880]: I1213 03:44:25.338721 1880 topology_manager.go:215] "Topology Admit Handler" podUID="c0767a20-3250-4e22-89ea-a25a5d209c11" podNamespace="kube-system" podName="cilium-mhwg7" Dec 13 03:44:25.353096 systemd[1]: Created slice kubepods-burstable-podc0767a20_3250_4e22_89ea_a25a5d209c11.slice. 
Dec 13 03:44:25.523885 kubelet[1880]: I1213 03:44:25.523775 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-bpf-maps\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.523885 kubelet[1880]: I1213 03:44:25.523873 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-lib-modules\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524282 kubelet[1880]: I1213 03:44:25.523981 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0767a20-3250-4e22-89ea-a25a5d209c11-cilium-ipsec-secrets\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524282 kubelet[1880]: I1213 03:44:25.524058 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-host-proc-sys-kernel\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524282 kubelet[1880]: I1213 03:44:25.524130 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brh9f\" (UniqueName: \"kubernetes.io/projected/c0767a20-3250-4e22-89ea-a25a5d209c11-kube-api-access-brh9f\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524282 kubelet[1880]: I1213 03:44:25.524204 1880 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-cilium-cgroup\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524282 kubelet[1880]: I1213 03:44:25.524257 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-cni-path\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524820 kubelet[1880]: I1213 03:44:25.524319 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0767a20-3250-4e22-89ea-a25a5d209c11-clustermesh-secrets\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524820 kubelet[1880]: I1213 03:44:25.524397 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-host-proc-sys-net\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524820 kubelet[1880]: I1213 03:44:25.524469 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-hostproc\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524820 kubelet[1880]: I1213 03:44:25.524525 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c0767a20-3250-4e22-89ea-a25a5d209c11-cilium-config-path\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524820 kubelet[1880]: I1213 03:44:25.524588 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-cilium-run\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.524820 kubelet[1880]: I1213 03:44:25.524648 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-xtables-lock\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.525418 kubelet[1880]: I1213 03:44:25.524700 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0767a20-3250-4e22-89ea-a25a5d209c11-etc-cni-netd\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.525418 kubelet[1880]: I1213 03:44:25.524759 1880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0767a20-3250-4e22-89ea-a25a5d209c11-hubble-tls\") pod \"cilium-mhwg7\" (UID: \"c0767a20-3250-4e22-89ea-a25a5d209c11\") " pod="kube-system/cilium-mhwg7" Dec 13 03:44:25.669710 env[1551]: time="2024-12-13T03:44:25.669565012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhwg7,Uid:c0767a20-3250-4e22-89ea-a25a5d209c11,Namespace:kube-system,Attempt:0,}" Dec 13 03:44:25.684681 env[1551]: time="2024-12-13T03:44:25.684621630Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:44:25.684681 env[1551]: time="2024-12-13T03:44:25.684641723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:44:25.684681 env[1551]: time="2024-12-13T03:44:25.684648242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:44:25.684793 env[1551]: time="2024-12-13T03:44:25.684735374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527 pid=3671 runtime=io.containerd.runc.v2 Dec 13 03:44:25.690131 systemd[1]: Started cri-containerd-a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527.scope. Dec 13 03:44:25.701777 env[1551]: time="2024-12-13T03:44:25.701751595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhwg7,Uid:c0767a20-3250-4e22-89ea-a25a5d209c11,Namespace:kube-system,Attempt:0,} returns sandbox id \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\"" Dec 13 03:44:25.703022 env[1551]: time="2024-12-13T03:44:25.702976840Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:44:25.706814 env[1551]: time="2024-12-13T03:44:25.706767463Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee\"" Dec 13 03:44:25.706990 env[1551]: time="2024-12-13T03:44:25.706977249Z" level=info msg="StartContainer for \"e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee\"" Dec 13 
03:44:25.715314 systemd[1]: Started cri-containerd-e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee.scope. Dec 13 03:44:25.732669 env[1551]: time="2024-12-13T03:44:25.732603335Z" level=info msg="StartContainer for \"e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee\" returns successfully" Dec 13 03:44:25.742245 systemd[1]: cri-containerd-e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee.scope: Deactivated successfully. Dec 13 03:44:25.768216 env[1551]: time="2024-12-13T03:44:25.768134145Z" level=info msg="shim disconnected" id=e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee Dec 13 03:44:25.768216 env[1551]: time="2024-12-13T03:44:25.768188598Z" level=warning msg="cleaning up after shim disconnected" id=e91d5635073b9a354a76278968803d414865b1239cdf8673b00b8e614c4862ee namespace=k8s.io Dec 13 03:44:25.768216 env[1551]: time="2024-12-13T03:44:25.768202385Z" level=info msg="cleaning up dead shim" Dec 13 03:44:25.776107 env[1551]: time="2024-12-13T03:44:25.776041007Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3756 runtime=io.containerd.runc.v2\n" Dec 13 03:44:25.963983 kubelet[1880]: E1213 03:44:25.963876 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:44:26.101408 kubelet[1880]: E1213 03:44:26.101254 1880 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 03:44:26.304201 sshd[3611]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.189 user=root Dec 13 03:44:26.304439 sshd[3611]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Dec 13 03:44:26.317811 env[1551]: time="2024-12-13T03:44:26.317715076Z" level=info 
msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 03:44:26.323922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870732297.mount: Deactivated successfully. Dec 13 03:44:26.325906 env[1551]: time="2024-12-13T03:44:26.325888128Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810\"" Dec 13 03:44:26.326325 env[1551]: time="2024-12-13T03:44:26.326258773Z" level=info msg="StartContainer for \"8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810\"" Dec 13 03:44:26.377216 systemd[1]: Started cri-containerd-8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810.scope. Dec 13 03:44:26.433498 env[1551]: time="2024-12-13T03:44:26.433390309Z" level=info msg="StartContainer for \"8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810\" returns successfully" Dec 13 03:44:26.452455 systemd[1]: cri-containerd-8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810.scope: Deactivated successfully. 
Dec 13 03:44:26.493204 env[1551]: time="2024-12-13T03:44:26.493061092Z" level=info msg="shim disconnected" id=8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810 Dec 13 03:44:26.493597 env[1551]: time="2024-12-13T03:44:26.493222486Z" level=warning msg="cleaning up after shim disconnected" id=8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810 namespace=k8s.io Dec 13 03:44:26.493597 env[1551]: time="2024-12-13T03:44:26.493276440Z" level=info msg="cleaning up dead shim" Dec 13 03:44:26.509984 env[1551]: time="2024-12-13T03:44:26.509870823Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 runtime=io.containerd.runc.v2\n" Dec 13 03:44:26.964294 kubelet[1880]: E1213 03:44:26.964243 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:44:27.017899 env[1551]: time="2024-12-13T03:44:27.017781393Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:44:27.019322 env[1551]: time="2024-12-13T03:44:27.019243199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:44:27.022065 env[1551]: time="2024-12-13T03:44:27.021982769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:44:27.023328 env[1551]: time="2024-12-13T03:44:27.023240744Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 03:44:27.027445 env[1551]: time="2024-12-13T03:44:27.027354497Z" level=info msg="CreateContainer within sandbox \"804aaf83fa21d29f35b0f7ea7b6f332ccaa88fc895fd68eaef45b8df39920d05\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 03:44:27.038184 env[1551]: time="2024-12-13T03:44:27.038076668Z" level=info msg="CreateContainer within sandbox \"804aaf83fa21d29f35b0f7ea7b6f332ccaa88fc895fd68eaef45b8df39920d05\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"046413eb2e1169f8d728cc3b409f50c288d1ca12e2ba4515efde4a472ed38063\"" Dec 13 03:44:27.038875 env[1551]: time="2024-12-13T03:44:27.038787146Z" level=info msg="StartContainer for \"046413eb2e1169f8d728cc3b409f50c288d1ca12e2ba4515efde4a472ed38063\"" Dec 13 03:44:27.056433 systemd[1]: Started cri-containerd-046413eb2e1169f8d728cc3b409f50c288d1ca12e2ba4515efde4a472ed38063.scope. Dec 13 03:44:27.068735 env[1551]: time="2024-12-13T03:44:27.068680799Z" level=info msg="StartContainer for \"046413eb2e1169f8d728cc3b409f50c288d1ca12e2ba4515efde4a472ed38063\" returns successfully" Dec 13 03:44:27.125014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ef15b2ee38eb8fc8d87a19f6539df473447ee528947d6c3847c9572dcac1810-rootfs.mount: Deactivated successfully. 
Dec 13 03:44:27.150129 kubelet[1880]: I1213 03:44:27.150109 1880 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7d599c-f789-4ce1-98cb-371d9f2ad3ea" path="/var/lib/kubelet/pods/3b7d599c-f789-4ce1-98cb-371d9f2ad3ea/volumes" Dec 13 03:44:27.327521 env[1551]: time="2024-12-13T03:44:27.327324060Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 03:44:27.349144 env[1551]: time="2024-12-13T03:44:27.349003523Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035\"" Dec 13 03:44:27.349904 env[1551]: time="2024-12-13T03:44:27.349836960Z" level=info msg="StartContainer for \"c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035\"" Dec 13 03:44:27.370884 kubelet[1880]: I1213 03:44:27.370738 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-cpc7k" podStartSLOduration=1.618261782 podStartE2EDuration="4.370702241s" podCreationTimestamp="2024-12-13 03:44:23 +0000 UTC" firstStartedPulling="2024-12-13 03:44:24.272439932 +0000 UTC m=+53.816675763" lastFinishedPulling="2024-12-13 03:44:27.024880339 +0000 UTC m=+56.569116222" observedRunningTime="2024-12-13 03:44:27.337320089 +0000 UTC m=+56.881555979" watchObservedRunningTime="2024-12-13 03:44:27.370702241 +0000 UTC m=+56.914938109" Dec 13 03:44:27.395660 systemd[1]: Started cri-containerd-c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035.scope. 
Dec 13 03:44:27.424703 env[1551]: time="2024-12-13T03:44:27.424663017Z" level=info msg="StartContainer for \"c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035\" returns successfully" Dec 13 03:44:27.427616 systemd[1]: cri-containerd-c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035.scope: Deactivated successfully. Dec 13 03:44:27.583965 env[1551]: time="2024-12-13T03:44:27.583672173Z" level=info msg="shim disconnected" id=c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035 Dec 13 03:44:27.583965 env[1551]: time="2024-12-13T03:44:27.583792583Z" level=warning msg="cleaning up after shim disconnected" id=c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035 namespace=k8s.io Dec 13 03:44:27.583965 env[1551]: time="2024-12-13T03:44:27.583827222Z" level=info msg="cleaning up dead shim" Dec 13 03:44:27.601247 env[1551]: time="2024-12-13T03:44:27.601105312Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3923 runtime=io.containerd.runc.v2\n" Dec 13 03:44:27.965315 kubelet[1880]: E1213 03:44:27.965195 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 03:44:28.120828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8cb35c46d29249e5cb7acfba15ed1488ff133797e8f1eedcea8ac770fa76035-rootfs.mount: Deactivated successfully. 
Dec 13 03:44:28.335823 env[1551]: time="2024-12-13T03:44:28.335589932Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 03:44:28.340128 sshd[3611]: Failed password for root from 92.255.85.189 port 43712 ssh2 Dec 13 03:44:28.354148 env[1551]: time="2024-12-13T03:44:28.353895851Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a\"" Dec 13 03:44:28.355726 env[1551]: time="2024-12-13T03:44:28.355633741Z" level=info msg="StartContainer for \"2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a\"" Dec 13 03:44:28.370860 systemd[1]: Started cri-containerd-2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a.scope. Dec 13 03:44:28.383285 env[1551]: time="2024-12-13T03:44:28.383258943Z" level=info msg="StartContainer for \"2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a\" returns successfully" Dec 13 03:44:28.383734 systemd[1]: cri-containerd-2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a.scope: Deactivated successfully. 
Dec 13 03:44:28.393520 env[1551]: time="2024-12-13T03:44:28.393464405Z" level=info msg="shim disconnected" id=2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a
Dec 13 03:44:28.393520 env[1551]: time="2024-12-13T03:44:28.393491483Z" level=warning msg="cleaning up after shim disconnected" id=2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a namespace=k8s.io
Dec 13 03:44:28.393520 env[1551]: time="2024-12-13T03:44:28.393497096Z" level=info msg="cleaning up dead shim"
Dec 13 03:44:28.397110 env[1551]: time="2024-12-13T03:44:28.397027846Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:44:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3978 runtime=io.containerd.runc.v2\n"
Dec 13 03:44:28.965763 kubelet[1880]: E1213 03:44:28.965670 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:29.120881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2607da1314bbf921b2dd3a7d04a72a0e95ed8ff760698eb40751a8f95493834a-rootfs.mount: Deactivated successfully.
Dec 13 03:44:29.345277 env[1551]: time="2024-12-13T03:44:29.345030036Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 03:44:29.364156 env[1551]: time="2024-12-13T03:44:29.364019539Z" level=info msg="CreateContainer within sandbox \"a389a1a1e7c4a648f29aeda9cabb3f2dc78b99335b925c75d10deb4d5b46b527\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2bae26a27e5c3217d7042a41964a42ee19352258c7e3ae5efb80e535330f293c\""
Dec 13 03:44:29.365011 env[1551]: time="2024-12-13T03:44:29.364918307Z" level=info msg="StartContainer for \"2bae26a27e5c3217d7042a41964a42ee19352258c7e3ae5efb80e535330f293c\""
Dec 13 03:44:29.405839 systemd[1]: Started cri-containerd-2bae26a27e5c3217d7042a41964a42ee19352258c7e3ae5efb80e535330f293c.scope.
Dec 13 03:44:29.447829 env[1551]: time="2024-12-13T03:44:29.447736608Z" level=info msg="StartContainer for \"2bae26a27e5c3217d7042a41964a42ee19352258c7e3ae5efb80e535330f293c\" returns successfully"
Dec 13 03:44:29.639993 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 03:44:29.966066 kubelet[1880]: E1213 03:44:29.965817 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:30.351356 kubelet[1880]: I1213 03:44:30.351279 1880 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mhwg7" podStartSLOduration=5.35126554 podStartE2EDuration="5.35126554s" podCreationTimestamp="2024-12-13 03:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:44:30.351236855 +0000 UTC m=+59.895472679" watchObservedRunningTime="2024-12-13 03:44:30.35126554 +0000 UTC m=+59.895501364"
Dec 13 03:44:30.503584 sshd[3611]: Connection closed by authenticating user root 92.255.85.189 port 43712 [preauth]
Dec 13 03:44:30.506275 systemd[1]: sshd@11-145.40.90.151:22-92.255.85.189:43712.service: Deactivated successfully.
Dec 13 03:44:30.922577 kubelet[1880]: E1213 03:44:30.922466 1880 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:30.966403 kubelet[1880]: E1213 03:44:30.966353 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:31.019385 env[1551]: time="2024-12-13T03:44:31.019323545Z" level=info msg="StopPodSandbox for \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\""
Dec 13 03:44:31.019654 env[1551]: time="2024-12-13T03:44:31.019418575Z" level=info msg="TearDown network for sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" successfully"
Dec 13 03:44:31.019654 env[1551]: time="2024-12-13T03:44:31.019438704Z" level=info msg="StopPodSandbox for \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" returns successfully"
Dec 13 03:44:31.019654 env[1551]: time="2024-12-13T03:44:31.019642652Z" level=info msg="RemovePodSandbox for \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\""
Dec 13 03:44:31.019753 env[1551]: time="2024-12-13T03:44:31.019666437Z" level=info msg="Forcibly stopping sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\""
Dec 13 03:44:31.019753 env[1551]: time="2024-12-13T03:44:31.019721688Z" level=info msg="TearDown network for sandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" successfully"
Dec 13 03:44:31.020981 env[1551]: time="2024-12-13T03:44:31.020968067Z" level=info msg="RemovePodSandbox \"f01faaa016aa9638d0c9972f02e7a70087b532afa6bf6127d83aa6724ff138ee\" returns successfully"
Dec 13 03:44:31.967316 kubelet[1880]: E1213 03:44:31.967193 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:32.649886 systemd-networkd[1310]: lxc_health: Link UP
Dec 13 03:44:32.667847 systemd-networkd[1310]: lxc_health: Gained carrier
Dec 13 03:44:32.668017 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 03:44:32.968397 kubelet[1880]: E1213 03:44:32.968254 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:33.968528 kubelet[1880]: E1213 03:44:33.968486 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:34.132042 systemd-networkd[1310]: lxc_health: Gained IPv6LL
Dec 13 03:44:34.968767 kubelet[1880]: E1213 03:44:34.968666 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:35.969966 kubelet[1880]: E1213 03:44:35.969844 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:36.971074 kubelet[1880]: E1213 03:44:36.970954 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:37.971899 kubelet[1880]: E1213 03:44:37.971702 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:38.972379 kubelet[1880]: E1213 03:44:38.972262 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 03:44:39.973107 kubelet[1880]: E1213 03:44:39.972990 1880 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"