Jul 2 11:09:33.564901 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 11:09:33.564914 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 11:09:33.564921 kernel: BIOS-provided physical RAM map:
Jul 2 11:09:33.564925 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Jul 2 11:09:33.564928 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Jul 2 11:09:33.564932 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jul 2 11:09:33.564936 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jul 2 11:09:33.564940 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jul 2 11:09:33.564944 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000080e53fff] usable
Jul 2 11:09:33.564948 kernel: BIOS-e820: [mem 0x0000000080e54000-0x0000000080e54fff] ACPI NVS
Jul 2 11:09:33.564952 kernel: BIOS-e820: [mem 0x0000000080e55000-0x0000000080e55fff] reserved
Jul 2 11:09:33.564956 kernel: BIOS-e820: [mem 0x0000000080e56000-0x000000008afcafff] usable
Jul 2 11:09:33.564960 kernel: BIOS-e820: [mem 0x000000008afcb000-0x000000008c0affff] reserved
Jul 2 11:09:33.564964 kernel: BIOS-e820: [mem 0x000000008c0b0000-0x000000008c238fff] usable
Jul 2 11:09:33.564969 kernel: BIOS-e820: [mem 0x000000008c239000-0x000000008c66afff] ACPI NVS
Jul 2 11:09:33.564974 kernel: BIOS-e820: [mem 0x000000008c66b000-0x000000008eefefff] reserved
Jul 2 11:09:33.564978 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Jul 2 11:09:33.564982 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Jul 2 11:09:33.564986 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 2 11:09:33.564990 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jul 2 11:09:33.564994 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jul 2 11:09:33.564998 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 2 11:09:33.565003 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jul 2 11:09:33.565007 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Jul 2 11:09:33.565011 kernel: NX (Execute Disable) protection: active
Jul 2 11:09:33.565015 kernel: SMBIOS 3.2.1 present.
Jul 2 11:09:33.565020 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Jul 2 11:09:33.565024 kernel: tsc: Detected 3400.000 MHz processor
Jul 2 11:09:33.565028 kernel: tsc: Detected 3399.906 MHz TSC
Jul 2 11:09:33.565033 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 11:09:33.565037 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 11:09:33.565042 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Jul 2 11:09:33.565046 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 11:09:33.565050 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Jul 2 11:09:33.565055 kernel: Using GB pages for direct mapping
Jul 2 11:09:33.565059 kernel: ACPI: Early table checksum verification disabled
Jul 2 11:09:33.565064 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jul 2 11:09:33.565068 kernel: ACPI: XSDT 0x000000008C54C0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jul 2 11:09:33.565072 kernel: ACPI: FACP 0x000000008C588670 000114 (v06 01072009 AMI 00010013)
Jul 2 11:09:33.565077 kernel: ACPI: DSDT 0x000000008C54C268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jul 2 11:09:33.565083 kernel: ACPI: FACS 0x000000008C66AF80 000040
Jul 2 11:09:33.565088 kernel: ACPI: APIC 0x000000008C588788 00012C (v04 01072009 AMI 00010013)
Jul 2 11:09:33.565093 kernel: ACPI: FPDT 0x000000008C5888B8 000044 (v01 01072009 AMI 00010013)
Jul 2 11:09:33.565098 kernel: ACPI: FIDT 0x000000008C588900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jul 2 11:09:33.565102 kernel: ACPI: MCFG 0x000000008C5889A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jul 2 11:09:33.565107 kernel: ACPI: SPMI 0x000000008C5889E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jul 2 11:09:33.565112 kernel: ACPI: SSDT 0x000000008C588A28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jul 2 11:09:33.565116 kernel: ACPI: SSDT 0x000000008C58A548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jul 2 11:09:33.565121 kernel: ACPI: SSDT 0x000000008C58D710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jul 2 11:09:33.565126 kernel: ACPI: HPET 0x000000008C58FA40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:09:33.565131 kernel: ACPI: SSDT 0x000000008C58FA78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jul 2 11:09:33.565136 kernel: ACPI: SSDT 0x000000008C590A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jul 2 11:09:33.565140 kernel: ACPI: UEFI 0x000000008C591320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:09:33.565145 kernel: ACPI: LPIT 0x000000008C591368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:09:33.565149 kernel: ACPI: SSDT 0x000000008C591400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jul 2 11:09:33.565154 kernel: ACPI: SSDT 0x000000008C593BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jul 2 11:09:33.565159 kernel: ACPI: DBGP 0x000000008C5950C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:09:33.565163 kernel: ACPI: DBG2 0x000000008C595100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:09:33.565169 kernel: ACPI: SSDT 0x000000008C595158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jul 2 11:09:33.565173 kernel: ACPI: DMAR 0x000000008C596CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Jul 2 11:09:33.565178 kernel: ACPI: SSDT 0x000000008C596D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jul 2 11:09:33.565183 kernel: ACPI: TPM2 0x000000008C596E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jul 2 11:09:33.565187 kernel: ACPI: SSDT 0x000000008C596EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jul 2 11:09:33.565192 kernel: ACPI: WSMT 0x000000008C597C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Jul 2 11:09:33.565196 kernel: ACPI: EINJ 0x000000008C597C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jul 2 11:09:33.565201 kernel: ACPI: ERST 0x000000008C597D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jul 2 11:09:33.565206 kernel: ACPI: BERT 0x000000008C597FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jul 2 11:09:33.565211 kernel: ACPI: HEST 0x000000008C597FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jul 2 11:09:33.565216 kernel: ACPI: SSDT 0x000000008C598278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jul 2 11:09:33.565220 kernel: ACPI: Reserving FACP table memory at [mem 0x8c588670-0x8c588783]
Jul 2 11:09:33.565225 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54c268-0x8c58866b]
Jul 2 11:09:33.565230 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66af80-0x8c66afbf]
Jul 2 11:09:33.565234 kernel: ACPI: Reserving APIC table memory at [mem 0x8c588788-0x8c5888b3]
Jul 2 11:09:33.565239 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c5888b8-0x8c5888fb]
Jul 2 11:09:33.565243 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c588900-0x8c58899b]
Jul 2 11:09:33.565249 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c5889a0-0x8c5889db]
Jul 2 11:09:33.565253 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c5889e0-0x8c588a20]
Jul 2 11:09:33.565258 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c588a28-0x8c58a543]
Jul 2 11:09:33.565263 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58a548-0x8c58d70d]
Jul 2 11:09:33.565267 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d710-0x8c58fa3a]
Jul 2 11:09:33.565272 kernel: ACPI: Reserving HPET table memory at [mem 0x8c58fa40-0x8c58fa77]
Jul 2 11:09:33.565276 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58fa78-0x8c590a25]
Jul 2 11:09:33.565281 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590a28-0x8c59131b]
Jul 2 11:09:33.565286 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c591320-0x8c591361]
Jul 2 11:09:33.565291 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c591368-0x8c5913fb]
Jul 2 11:09:33.565296 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591400-0x8c593bdd]
Jul 2 11:09:33.565300 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593be0-0x8c5950c1]
Jul 2 11:09:33.565305 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5950c8-0x8c5950fb]
Jul 2 11:09:33.565309 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c595100-0x8c595153]
Jul 2 11:09:33.565314 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595158-0x8c596cbe]
Jul 2 11:09:33.565319 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c596cc0-0x8c596d2f]
Jul 2 11:09:33.565323 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596d30-0x8c596e73]
Jul 2 11:09:33.565328 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c596e78-0x8c596eab]
Jul 2 11:09:33.565333 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596eb0-0x8c597c3e]
Jul 2 11:09:33.565338 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c597c40-0x8c597c67]
Jul 2 11:09:33.565342 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c597c68-0x8c597d97]
Jul 2 11:09:33.565347 kernel: ACPI: Reserving ERST table memory at [mem 0x8c597d98-0x8c597fc7]
Jul 2 11:09:33.565352 kernel: ACPI: Reserving BERT table memory at [mem 0x8c597fc8-0x8c597ff7]
Jul 2 11:09:33.565356 kernel: ACPI: Reserving HEST table memory at [mem 0x8c597ff8-0x8c598273]
Jul 2 11:09:33.565361 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598278-0x8c5983d9]
Jul 2 11:09:33.565366 kernel: No NUMA configuration found
Jul 2 11:09:33.565370 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Jul 2 11:09:33.565376 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Jul 2 11:09:33.565380 kernel: Zone ranges:
Jul 2 11:09:33.565385 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 11:09:33.565390 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 11:09:33.565394 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Jul 2 11:09:33.565399 kernel: Movable zone start for each node
Jul 2 11:09:33.565403 kernel: Early memory node ranges
Jul 2 11:09:33.565408 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Jul 2 11:09:33.565413 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Jul 2 11:09:33.565417 kernel:   node   0: [mem 0x0000000040400000-0x0000000080e53fff]
Jul 2 11:09:33.565423 kernel:   node   0: [mem 0x0000000080e56000-0x000000008afcafff]
Jul 2 11:09:33.565427 kernel:   node   0: [mem 0x000000008c0b0000-0x000000008c238fff]
Jul 2 11:09:33.565432 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Jul 2 11:09:33.565437 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Jul 2 11:09:33.565441 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Jul 2 11:09:33.565446 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 11:09:33.565454 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Jul 2 11:09:33.565460 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 2 11:09:33.565465 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jul 2 11:09:33.565470 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Jul 2 11:09:33.565475 kernel: On node 0, zone DMA32: 11462 pages in unavailable ranges
Jul 2 11:09:33.565501 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Jul 2 11:09:33.565506 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Jul 2 11:09:33.565511 kernel: ACPI: PM-Timer IO Port: 0x1808
Jul 2 11:09:33.565517 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 2 11:09:33.565540 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 2 11:09:33.565545 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 2 11:09:33.565550 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 2 11:09:33.565555 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 2 11:09:33.565560 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 2 11:09:33.565565 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 2 11:09:33.565570 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 2 11:09:33.565575 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 2 11:09:33.565580 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 2 11:09:33.565585 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 2 11:09:33.565590 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 2 11:09:33.565595 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 2 11:09:33.565600 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 2 11:09:33.565605 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 2 11:09:33.565610 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 2 11:09:33.565615 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jul 2 11:09:33.565620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 11:09:33.565625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 11:09:33.565630 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 11:09:33.565635 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 11:09:33.565640 kernel: TSC deadline timer available
Jul 2 11:09:33.565645 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jul 2 11:09:33.565650 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Jul 2 11:09:33.565655 kernel: Booting paravirtualized kernel on bare hardware
Jul 2 11:09:33.565660 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 11:09:33.565665 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 2 11:09:33.565670 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 2 11:09:33.565675 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 2 11:09:33.565680 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 11:09:33.565686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232413
Jul 2 11:09:33.565691 kernel: Policy zone: Normal
Jul 2 11:09:33.565696 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 11:09:33.565701 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 11:09:33.565706 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jul 2 11:09:33.565711 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jul 2 11:09:33.565717 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 11:09:33.565722 kernel: Memory: 32722596K/33452972K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 730116K reserved, 0K cma-reserved)
Jul 2 11:09:33.565728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 11:09:33.565733 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 11:09:33.565738 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 11:09:33.565743 kernel: rcu: Hierarchical RCU implementation.
Jul 2 11:09:33.565748 kernel: rcu: RCU event tracing is enabled.
Jul 2 11:09:33.565753 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 11:09:33.565758 kernel: Rude variant of Tasks RCU enabled.
Jul 2 11:09:33.565763 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 11:09:33.565769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 11:09:33.565774 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 11:09:33.565778 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jul 2 11:09:33.565783 kernel: random: crng init done
Jul 2 11:09:33.565788 kernel: Console: colour dummy device 80x25
Jul 2 11:09:33.565793 kernel: printk: console [tty0] enabled
Jul 2 11:09:33.565798 kernel: printk: console [ttyS1] enabled
Jul 2 11:09:33.565803 kernel: ACPI: Core revision 20210730
Jul 2 11:09:33.565808 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Jul 2 11:09:33.565813 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 11:09:33.565819 kernel: DMAR: Host address width 39
Jul 2 11:09:33.565824 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jul 2 11:09:33.565829 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jul 2 11:09:33.565834 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Jul 2 11:09:33.565839 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Jul 2 11:09:33.565844 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jul 2 11:09:33.565848 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jul 2 11:09:33.565853 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jul 2 11:09:33.565858 kernel: x2apic enabled
Jul 2 11:09:33.565864 kernel: Switched APIC routing to cluster x2apic.
Jul 2 11:09:33.565869 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jul 2 11:09:33.565874 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jul 2 11:09:33.565879 kernel: CPU0: Thermal monitoring enabled (TM1)
Jul 2 11:09:33.565884 kernel: process: using mwait in idle threads
Jul 2 11:09:33.565889 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 11:09:33.565894 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 11:09:33.565899 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 11:09:33.565904 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 11:09:33.565910 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 2 11:09:33.565914 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 2 11:09:33.565919 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 2 11:09:33.565924 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 11:09:33.565929 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 2 11:09:33.565934 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 2 11:09:33.565939 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 11:09:33.565943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 11:09:33.565948 kernel: TAA: Mitigation: TSX disabled
Jul 2 11:09:33.565953 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jul 2 11:09:33.565958 kernel: SRBDS: Mitigation: Microcode
Jul 2 11:09:33.565964 kernel: GDS: Vulnerable: No microcode
Jul 2 11:09:33.565969 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 11:09:33.565974 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 11:09:33.565979 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 11:09:33.565983 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 11:09:33.565988 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 11:09:33.565993 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 11:09:33.565998 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 11:09:33.566003 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 11:09:33.566008 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jul 2 11:09:33.566013 kernel: Freeing SMP alternatives memory: 32K
Jul 2 11:09:33.566018 kernel: pid_max: default: 32768 minimum: 301
Jul 2 11:09:33.566023 kernel: LSM: Security Framework initializing
Jul 2 11:09:33.566028 kernel: SELinux: Initializing.
Jul 2 11:09:33.566033 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 11:09:33.566038 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 11:09:33.566043 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jul 2 11:09:33.566048 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 2 11:09:33.566053 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jul 2 11:09:33.566058 kernel: ... version:                4
Jul 2 11:09:33.566062 kernel: ... bit width:              48
Jul 2 11:09:33.566067 kernel: ... generic registers:      4
Jul 2 11:09:33.566073 kernel: ... value mask:             0000ffffffffffff
Jul 2 11:09:33.566078 kernel: ... max period:             00007fffffffffff
Jul 2 11:09:33.566083 kernel: ... fixed-purpose events:   3
Jul 2 11:09:33.566088 kernel: ... event mask:             000000070000000f
Jul 2 11:09:33.566093 kernel: signal: max sigframe size: 2032
Jul 2 11:09:33.566098 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 11:09:33.566103 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jul 2 11:09:33.566108 kernel: smp: Bringing up secondary CPUs ...
Jul 2 11:09:33.566113 kernel: x86: Booting SMP configuration:
Jul 2 11:09:33.566118 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8
Jul 2 11:09:33.566124 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 11:09:33.566129 kernel:   #9 #10 #11 #12 #13 #14 #15
Jul 2 11:09:33.566133 kernel: smp: Brought up 1 node, 16 CPUs
Jul 2 11:09:33.566138 kernel: smpboot: Max logical packages: 1
Jul 2 11:09:33.566143 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jul 2 11:09:33.566148 kernel: devtmpfs: initialized
Jul 2 11:09:33.566153 kernel: x86/mm: Memory block size: 128MB
Jul 2 11:09:33.566158 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x80e54000-0x80e54fff] (4096 bytes)
Jul 2 11:09:33.566164 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c239000-0x8c66afff] (4399104 bytes)
Jul 2 11:09:33.566169 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 11:09:33.566174 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 11:09:33.566179 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 11:09:33.566184 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 11:09:33.566189 kernel: audit: initializing netlink subsys (disabled)
Jul 2 11:09:33.566194 kernel: audit: type=2000 audit(1719918568.041:1): state=initialized audit_enabled=0 res=1
Jul 2 11:09:33.566199 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 11:09:33.566204 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 11:09:33.566210 kernel: cpuidle: using governor menu
Jul 2 11:09:33.566214 kernel: ACPI: bus type PCI registered
Jul 2 11:09:33.566219 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 11:09:33.566224 kernel: dca service started, version 1.12.1
Jul 2 11:09:33.566229 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 2 11:09:33.566234 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Jul 2 11:09:33.566239 kernel: PCI: Using configuration type 1 for base access
Jul 2 11:09:33.566244 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jul 2 11:09:33.566249 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 11:09:33.566255 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 11:09:33.566260 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 11:09:33.566265 kernel: ACPI: Added _OSI(Module Device)
Jul 2 11:09:33.566269 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 11:09:33.566274 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 11:09:33.566279 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 11:09:33.566284 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 11:09:33.566289 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 11:09:33.566294 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 11:09:33.566300 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jul 2 11:09:33.566305 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:09:33.566310 kernel: ACPI: SSDT 0xFFFF98E180220100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jul 2 11:09:33.566315 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Jul 2 11:09:33.566320 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:09:33.566324 kernel: ACPI: SSDT 0xFFFF98E181AE8C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Jul 2 11:09:33.566329 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:09:33.566334 kernel: ACPI: SSDT 0xFFFF98E181A63800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jul 2 11:09:33.566339 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:09:33.566345 kernel: ACPI: SSDT 0xFFFF98E181B53000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jul 2 11:09:33.566350 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:09:33.566355 kernel: ACPI: SSDT 0xFFFF98E180152000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jul 2 11:09:33.566360 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:09:33.566365 kernel: ACPI: SSDT 0xFFFF98E181AEE400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Jul 2 11:09:33.566369 kernel: ACPI: Interpreter enabled
Jul 2 11:09:33.566374 kernel: ACPI: PM: (supports S0 S5)
Jul 2 11:09:33.566379 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 11:09:33.566384 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jul 2 11:09:33.566390 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jul 2 11:09:33.566395 kernel: HEST: Table parsing has been initialized.
Jul 2 11:09:33.566400 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jul 2 11:09:33.566405 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 11:09:33.566410 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jul 2 11:09:33.566415 kernel: ACPI: PM: Power Resource [USBC]
Jul 2 11:09:33.566419 kernel: ACPI: PM: Power Resource [V0PR]
Jul 2 11:09:33.566424 kernel: ACPI: PM: Power Resource [V1PR]
Jul 2 11:09:33.566429 kernel: ACPI: PM: Power Resource [V2PR]
Jul 2 11:09:33.566434 kernel: ACPI: PM: Power Resource [WRST]
Jul 2 11:09:33.566440 kernel: ACPI: PM: Power Resource [FN00]
Jul 2 11:09:33.566445 kernel: ACPI: PM: Power Resource [FN01]
Jul 2 11:09:33.566450 kernel: ACPI: PM: Power Resource [FN02]
Jul 2 11:09:33.566454 kernel: ACPI: PM: Power Resource [FN03]
Jul 2 11:09:33.566459 kernel: ACPI: PM: Power Resource [FN04]
Jul 2 11:09:33.566464 kernel: ACPI: PM: Power Resource [PIN]
Jul 2 11:09:33.566469 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jul 2 11:09:33.566553 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 11:09:33.566602 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jul 2 11:09:33.566644 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jul 2 11:09:33.566651 kernel: PCI host bridge to bus 0000:00
Jul 2 11:09:33.566695 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 11:09:33.566734 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 11:09:33.566772 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 11:09:33.566810 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Jul 2 11:09:33.566849 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jul 2 11:09:33.566887 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jul 2 11:09:33.566938 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jul 2 11:09:33.566991 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jul 2 11:09:33.567036 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jul 2 11:09:33.567083 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Jul 2 11:09:33.567130 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Jul 2 11:09:33.567178 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jul 2 11:09:33.567222 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Jul 2 11:09:33.567269 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jul 2 11:09:33.567312 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Jul 2 11:09:33.567357 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jul 2 11:09:33.567405 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jul 2 11:09:33.567449 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Jul 2 11:09:33.567494 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Jul 2 11:09:33.567542 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jul 2 11:09:33.567585 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 11:09:33.567633 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jul 2 11:09:33.567678 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 11:09:33.567724 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jul 2 11:09:33.567767 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Jul 2 11:09:33.567809 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jul 2 11:09:33.567856 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jul 2 11:09:33.567898 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Jul 2 11:09:33.567942 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jul 2 11:09:33.567989 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jul 2 11:09:33.568032 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Jul 2 11:09:33.568074 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jul 2 11:09:33.568119 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Jul 2 11:09:33.568162 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Jul 2 11:09:33.568206 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Jul 2 11:09:33.568254 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Jul 2 11:09:33.568299 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Jul 2 11:09:33.568343 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Jul 2 11:09:33.568385 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Jul 2 11:09:33.568427 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jul 2 11:09:33.568475 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jul 2 11:09:33.568522 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jul 2 11:09:33.568570 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jul 2 11:09:33.568615 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jul 2 11:09:33.568664 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jul 2 11:09:33.568708 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jul 2 11:09:33.568755 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jul 2 11:09:33.568798 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jul 2 11:09:33.568847 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Jul 2 11:09:33.568891 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Jul 2 11:09:33.568939 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jul 2 11:09:33.568982 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 11:09:33.569031 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jul 2 11:09:33.569078 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jul 2 11:09:33.569122 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Jul 2 11:09:33.569163 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jul 2 11:09:33.569213 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jul 2 11:09:33.569255 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jul 2 11:09:33.569307 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Jul 2 11:09:33.569353 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jul 2 11:09:33.569397 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Jul 2 11:09:33.569442 kernel: pci 0000:01:00.0: PME# supported from D3cold
Jul 2 11:09:33.569488 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 2 11:09:33.569533 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 2 11:09:33.569582 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Jul 2 11:09:33.569629 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Jul 2 11:09:33.569674 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Jul 2 11:09:33.569738 kernel: pci 0000:01:00.1: PME# supported from D3cold
Jul 2 11:09:33.569781 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Jul 2 11:09:33.569826 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Jul 2 11:09:33.569869 kernel: pci 0000:00:01.0: PCI
bridge to [bus 01] Jul 2 11:09:33.569912 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 2 11:09:33.569956 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:09:33.569999 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 2 11:09:33.570049 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jul 2 11:09:33.570094 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Jul 2 11:09:33.570138 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Jul 2 11:09:33.570182 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Jul 2 11:09:33.570226 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Jul 2 11:09:33.570270 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 2 11:09:33.570373 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 2 11:09:33.570416 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 11:09:33.570461 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 2 11:09:33.570531 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 2 11:09:33.570577 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 2 11:09:33.570620 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Jul 2 11:09:33.570665 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Jul 2 11:09:33.570710 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Jul 2 11:09:33.570754 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 2 11:09:33.570798 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 2 11:09:33.570841 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 11:09:33.570883 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 2 11:09:33.570925 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 2 11:09:33.570973 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Jul 2 11:09:33.571018 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 2 
11:09:33.571063 kernel: pci 0000:06:00.0: supports D1 D2 Jul 2 11:09:33.571106 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 11:09:33.571149 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 2 11:09:33.571191 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 2 11:09:33.571233 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:09:33.571282 kernel: pci_bus 0000:07: extended config space not accessible Jul 2 11:09:33.571334 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Jul 2 11:09:33.571383 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Jul 2 11:09:33.571429 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Jul 2 11:09:33.571474 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Jul 2 11:09:33.571561 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 11:09:33.571607 kernel: pci 0000:07:00.0: supports D1 D2 Jul 2 11:09:33.571655 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 11:09:33.571698 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 2 11:09:33.571744 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 2 11:09:33.571788 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:09:33.571796 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 2 11:09:33.571801 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 2 11:09:33.571807 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 2 11:09:33.571812 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 2 11:09:33.571817 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 2 11:09:33.571822 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 2 11:09:33.571828 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 2 11:09:33.571834 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 2 11:09:33.571840 kernel: 
iommu: Default domain type: Translated Jul 2 11:09:33.571845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 11:09:33.571889 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 2 11:09:33.571936 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 11:09:33.571981 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 2 11:09:33.571990 kernel: vgaarb: loaded Jul 2 11:09:33.571995 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 11:09:33.572001 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 11:09:33.572007 kernel: PTP clock support registered Jul 2 11:09:33.572012 kernel: PCI: Using ACPI for IRQ routing Jul 2 11:09:33.572017 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 11:09:33.572022 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 2 11:09:33.572028 kernel: e820: reserve RAM buffer [mem 0x80e54000-0x83ffffff] Jul 2 11:09:33.572033 kernel: e820: reserve RAM buffer [mem 0x8afcb000-0x8bffffff] Jul 2 11:09:33.572038 kernel: e820: reserve RAM buffer [mem 0x8c239000-0x8fffffff] Jul 2 11:09:33.572043 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 2 11:09:33.572049 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 2 11:09:33.572054 kernel: clocksource: Switched to clocksource tsc-early Jul 2 11:09:33.572059 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 11:09:33.572064 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 11:09:33.572070 kernel: pnp: PnP ACPI init Jul 2 11:09:33.572114 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 2 11:09:33.572156 kernel: pnp 00:02: [dma 0 disabled] Jul 2 11:09:33.572199 kernel: pnp 00:03: [dma 0 disabled] Jul 2 11:09:33.572244 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 2 11:09:33.572284 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 2 11:09:33.572326 kernel: system 00:05: [io 
0x1854-0x1857] has been reserved Jul 2 11:09:33.572368 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jul 2 11:09:33.572406 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jul 2 11:09:33.572445 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jul 2 11:09:33.572509 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jul 2 11:09:33.572562 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 2 11:09:33.572599 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 2 11:09:33.572637 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 2 11:09:33.572676 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 2 11:09:33.572717 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jul 2 11:09:33.572755 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 2 11:09:33.572796 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 2 11:09:33.572833 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 2 11:09:33.572872 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 2 11:09:33.572909 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 2 11:09:33.572948 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jul 2 11:09:33.572989 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jul 2 11:09:33.572997 kernel: pnp: PnP ACPI: found 10 devices Jul 2 11:09:33.573003 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 11:09:33.573009 kernel: NET: Registered PF_INET protocol family Jul 2 11:09:33.573014 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 11:09:33.573020 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 11:09:33.573025 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Jul 2 11:09:33.573030 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 11:09:33.573035 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 11:09:33.573041 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 2 11:09:33.573046 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 11:09:33.573052 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 11:09:33.573057 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 11:09:33.573064 kernel: NET: Registered PF_XDP protocol family Jul 2 11:09:33.573106 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Jul 2 11:09:33.573149 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Jul 2 11:09:33.573191 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Jul 2 11:09:33.573236 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 11:09:33.573280 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 11:09:33.573327 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 11:09:33.573371 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 11:09:33.573415 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 11:09:33.573457 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 2 11:09:33.573527 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:09:33.573571 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 2 11:09:33.573617 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 2 11:09:33.573661 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 11:09:33.573705 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 2 11:09:33.573750 kernel: pci 
0000:00:1b.5: PCI bridge to [bus 04] Jul 2 11:09:33.573794 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 11:09:33.573837 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 2 11:09:33.573880 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 2 11:09:33.573928 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 2 11:09:33.573972 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 2 11:09:33.574017 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:09:33.574061 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 2 11:09:33.574104 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 2 11:09:33.574147 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 2 11:09:33.574187 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 2 11:09:33.574226 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 11:09:33.574263 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 11:09:33.574302 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 11:09:33.574341 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 2 11:09:33.574378 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 2 11:09:33.574423 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 2 11:09:33.574464 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:09:33.574514 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 2 11:09:33.574557 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 2 11:09:33.574601 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 2 11:09:33.574641 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 2 11:09:33.574685 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 2 11:09:33.574745 kernel: pci_bus 0000:06: resource 1 [mem 
0x94000000-0x950fffff] Jul 2 11:09:33.574786 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 2 11:09:33.574829 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 2 11:09:33.574837 kernel: PCI: CLS 64 bytes, default 64 Jul 2 11:09:33.574843 kernel: DMAR: No ATSR found Jul 2 11:09:33.574848 kernel: DMAR: No SATC found Jul 2 11:09:33.574854 kernel: DMAR: dmar0: Using Queued invalidation Jul 2 11:09:33.574897 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 2 11:09:33.574941 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 2 11:09:33.574983 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 2 11:09:33.575026 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 2 11:09:33.575071 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 2 11:09:33.575114 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 2 11:09:33.575156 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 2 11:09:33.575198 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 2 11:09:33.575240 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 2 11:09:33.575281 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 2 11:09:33.575322 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 2 11:09:33.575364 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 2 11:09:33.575406 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 2 11:09:33.575451 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 2 11:09:33.575517 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 2 11:09:33.575581 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 2 11:09:33.575623 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 2 11:09:33.575666 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 2 11:09:33.575708 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 2 11:09:33.575750 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 2 11:09:33.575793 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 2 11:09:33.575838 kernel: pci 0000:01:00.0: Adding to iommu 
group 1 Jul 2 11:09:33.575883 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 2 11:09:33.575927 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 2 11:09:33.575972 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 2 11:09:33.576015 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 2 11:09:33.576062 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 2 11:09:33.576070 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 2 11:09:33.576075 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 11:09:33.576082 kernel: software IO TLB: mapped [mem 0x0000000086fcb000-0x000000008afcb000] (64MB) Jul 2 11:09:33.576088 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 2 11:09:33.576093 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 2 11:09:33.576098 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 2 11:09:33.576103 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 2 11:09:33.576149 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 2 11:09:33.576157 kernel: Initialise system trusted keyrings Jul 2 11:09:33.576163 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 2 11:09:33.576169 kernel: Key type asymmetric registered Jul 2 11:09:33.576175 kernel: Asymmetric key parser 'x509' registered Jul 2 11:09:33.576180 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 11:09:33.576185 kernel: io scheduler mq-deadline registered Jul 2 11:09:33.576190 kernel: io scheduler kyber registered Jul 2 11:09:33.576196 kernel: io scheduler bfq registered Jul 2 11:09:33.576240 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 2 11:09:33.576283 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 2 11:09:33.576325 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 2 11:09:33.576371 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 2 
11:09:33.576413 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 2 11:09:33.576456 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 2 11:09:33.576549 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 2 11:09:33.576556 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 2 11:09:33.576562 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 2 11:09:33.576567 kernel: pstore: Registered erst as persistent store backend Jul 2 11:09:33.576574 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 11:09:33.576580 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 11:09:33.576585 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 11:09:33.576590 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 11:09:33.576595 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 2 11:09:33.576641 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 2 11:09:33.576649 kernel: i8042: PNP: No PS/2 controller found. 
Jul 2 11:09:33.576687 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 2 11:09:33.576729 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 2 11:09:33.576768 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-07-02T11:09:32 UTC (1719918572) Jul 2 11:09:33.576806 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 2 11:09:33.576813 kernel: fail to initialize ptp_kvm Jul 2 11:09:33.576818 kernel: intel_pstate: Intel P-state driver initializing Jul 2 11:09:33.576824 kernel: intel_pstate: Disabling energy efficiency optimization Jul 2 11:09:33.576829 kernel: intel_pstate: HWP enabled Jul 2 11:09:33.576834 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jul 2 11:09:33.576839 kernel: vesafb: scrolling: redraw Jul 2 11:09:33.576846 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jul 2 11:09:33.576851 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000b9169f6b, using 768k, total 768k Jul 2 11:09:33.576857 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 11:09:33.576862 kernel: fb0: VESA VGA frame buffer device Jul 2 11:09:33.576867 kernel: NET: Registered PF_INET6 protocol family Jul 2 11:09:33.576873 kernel: Segment Routing with IPv6 Jul 2 11:09:33.576878 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 11:09:33.576883 kernel: NET: Registered PF_PACKET protocol family Jul 2 11:09:33.576888 kernel: Key type dns_resolver registered Jul 2 11:09:33.576894 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Jul 2 11:09:33.576900 kernel: microcode: Microcode Update Driver: v2.2. 
Jul 2 11:09:33.576905 kernel: IPI shorthand broadcast: enabled Jul 2 11:09:33.576910 kernel: sched_clock: Marking stable (1735643391, 1339328721)->(4518586717, -1443614605) Jul 2 11:09:33.576915 kernel: registered taskstats version 1 Jul 2 11:09:33.576921 kernel: Loading compiled-in X.509 certificates Jul 2 11:09:33.576926 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 11:09:33.576931 kernel: Key type .fscrypt registered Jul 2 11:09:33.576936 kernel: Key type fscrypt-provisioning registered Jul 2 11:09:33.576943 kernel: pstore: Using crash dump compression: deflate Jul 2 11:09:33.576948 kernel: ima: Allocated hash algorithm: sha1 Jul 2 11:09:33.576953 kernel: ima: No architecture policies found Jul 2 11:09:33.576958 kernel: clk: Disabling unused clocks Jul 2 11:09:33.576964 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 11:09:33.576969 kernel: Write protecting the kernel read-only data: 28672k Jul 2 11:09:33.576974 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 11:09:33.576979 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 11:09:33.576985 kernel: Run /init as init process Jul 2 11:09:33.576991 kernel: with arguments: Jul 2 11:09:33.576997 kernel: /init Jul 2 11:09:33.577002 kernel: with environment: Jul 2 11:09:33.577007 kernel: HOME=/ Jul 2 11:09:33.577012 kernel: TERM=linux Jul 2 11:09:33.577017 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 11:09:33.577023 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 11:09:33.577030 systemd[1]: Detected architecture x86-64. Jul 2 11:09:33.577036 systemd[1]: Running in initrd. 
Jul 2 11:09:33.577042 systemd[1]: No hostname configured, using default hostname. Jul 2 11:09:33.577047 systemd[1]: Hostname set to . Jul 2 11:09:33.577053 systemd[1]: Initializing machine ID from random generator. Jul 2 11:09:33.577058 systemd[1]: Queued start job for default target initrd.target. Jul 2 11:09:33.577064 systemd[1]: Started systemd-ask-password-console.path. Jul 2 11:09:33.577069 systemd[1]: Reached target cryptsetup.target. Jul 2 11:09:33.577074 systemd[1]: Reached target paths.target. Jul 2 11:09:33.577080 systemd[1]: Reached target slices.target. Jul 2 11:09:33.577086 systemd[1]: Reached target swap.target. Jul 2 11:09:33.577091 systemd[1]: Reached target timers.target. Jul 2 11:09:33.577096 systemd[1]: Listening on iscsid.socket. Jul 2 11:09:33.577102 systemd[1]: Listening on iscsiuio.socket. Jul 2 11:09:33.577108 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 11:09:33.577113 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 11:09:33.577119 systemd[1]: Listening on systemd-journald.socket. Jul 2 11:09:33.577125 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Jul 2 11:09:33.577130 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Jul 2 11:09:33.577135 kernel: clocksource: Switched to clocksource tsc Jul 2 11:09:33.577141 systemd[1]: Listening on systemd-networkd.socket. Jul 2 11:09:33.577146 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 11:09:33.577152 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 11:09:33.577157 systemd[1]: Reached target sockets.target. Jul 2 11:09:33.577163 systemd[1]: Starting kmod-static-nodes.service... Jul 2 11:09:33.577169 systemd[1]: Finished network-cleanup.service. Jul 2 11:09:33.577174 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 11:09:33.577180 systemd[1]: Starting systemd-journald.service... Jul 2 11:09:33.577185 systemd[1]: Starting systemd-modules-load.service... 
Jul 2 11:09:33.577193 systemd-journald[268]: Journal started Jul 2 11:09:33.577219 systemd-journald[268]: Runtime Journal (/run/log/journal/ff940f87e3b34644aa4f1f42561d8851) is 8.0M, max 640.0M, 632.0M free. Jul 2 11:09:33.578590 systemd-modules-load[269]: Inserted module 'overlay' Jul 2 11:09:33.584000 audit: BPF prog-id=6 op=LOAD Jul 2 11:09:33.602533 kernel: audit: type=1334 audit(1719918573.584:2): prog-id=6 op=LOAD Jul 2 11:09:33.602551 systemd[1]: Starting systemd-resolved.service... Jul 2 11:09:33.651529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 11:09:33.668525 kernel: Bridge firewalling registered Jul 2 11:09:33.668541 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 11:09:33.683062 systemd-modules-load[269]: Inserted module 'br_netfilter' Jul 2 11:09:33.702575 systemd[1]: Started systemd-journald.service. Jul 2 11:09:33.685540 systemd-resolved[271]: Positive Trust Anchors: Jul 2 11:09:33.762821 kernel: SCSI subsystem initialized Jul 2 11:09:33.762832 kernel: audit: type=1130 audit(1719918573.714:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.685547 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 11:09:33.891694 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 11:09:33.891710 kernel: audit: type=1130 audit(1719918573.791:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.891718 kernel: device-mapper: uevent: version 1.0.3 Jul 2 11:09:33.891724 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 11:09:33.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.685566 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 11:09:33.965742 kernel: audit: type=1130 audit(1719918573.900:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.687088 systemd-resolved[271]: Defaulting to hostname 'linux'. Jul 2 11:09:33.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.714707 systemd[1]: Started systemd-resolved.service. 
Jul 2 11:09:34.070883 kernel: audit: type=1130 audit(1719918573.974:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.070895 kernel: audit: type=1130 audit(1719918574.026:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.791655 systemd[1]: Finished kmod-static-nodes.service. Jul 2 11:09:34.125688 kernel: audit: type=1130 audit(1719918574.079:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:33.881435 systemd-modules-load[269]: Inserted module 'dm_multipath' Jul 2 11:09:33.900888 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 11:09:33.974824 systemd[1]: Finished systemd-modules-load.service. Jul 2 11:09:34.026817 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 11:09:34.079790 systemd[1]: Reached target nss-lookup.target. Jul 2 11:09:34.135093 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 11:09:34.144345 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:09:34.144741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 11:09:34.147720 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 2 11:09:34.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.148343 systemd[1]: Finished systemd-sysctl.service. Jul 2 11:09:34.196698 kernel: audit: type=1130 audit(1719918574.147:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.208821 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 11:09:34.273605 kernel: audit: type=1130 audit(1719918574.208:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.266096 systemd[1]: Starting dracut-cmdline.service... 
Jul 2 11:09:34.287584 dracut-cmdline[293]: dracut-dracut-053 Jul 2 11:09:34.287584 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 11:09:34.287584 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 11:09:34.355550 kernel: Loading iSCSI transport class v2.0-870. Jul 2 11:09:34.355562 kernel: iscsi: registered transport (tcp) Jul 2 11:09:34.395530 kernel: iscsi: registered transport (qla4xxx) Jul 2 11:09:34.395549 kernel: QLogic iSCSI HBA Driver Jul 2 11:09:34.429352 systemd[1]: Finished dracut-cmdline.service. Jul 2 11:09:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:34.438295 systemd[1]: Starting dracut-pre-udev.service... 
Jul 2 11:09:34.493514 kernel: raid6: avx2x4 gen() 48111 MB/s Jul 2 11:09:34.528557 kernel: raid6: avx2x4 xor() 14125 MB/s Jul 2 11:09:34.563511 kernel: raid6: avx2x2 gen() 51379 MB/s Jul 2 11:09:34.598513 kernel: raid6: avx2x2 xor() 31302 MB/s Jul 2 11:09:34.632539 kernel: raid6: avx2x1 gen() 44446 MB/s Jul 2 11:09:34.666546 kernel: raid6: avx2x1 xor() 27857 MB/s Jul 2 11:09:34.700551 kernel: raid6: sse2x4 gen() 21373 MB/s Jul 2 11:09:34.734545 kernel: raid6: sse2x4 xor() 11892 MB/s Jul 2 11:09:34.768516 kernel: raid6: sse2x2 gen() 21654 MB/s Jul 2 11:09:34.802546 kernel: raid6: sse2x2 xor() 13390 MB/s Jul 2 11:09:34.836546 kernel: raid6: sse2x1 gen() 18297 MB/s Jul 2 11:09:34.887909 kernel: raid6: sse2x1 xor() 8940 MB/s Jul 2 11:09:34.887925 kernel: raid6: using algorithm avx2x2 gen() 51379 MB/s Jul 2 11:09:34.887933 kernel: raid6: .... xor() 31302 MB/s, rmw enabled Jul 2 11:09:34.905875 kernel: raid6: using avx2x2 recovery algorithm Jul 2 11:09:34.951540 kernel: xor: automatically using best checksumming function avx Jul 2 11:09:35.030514 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 11:09:35.035819 systemd[1]: Finished dracut-pre-udev.service. Jul 2 11:09:35.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:35.045000 audit: BPF prog-id=7 op=LOAD Jul 2 11:09:35.045000 audit: BPF prog-id=8 op=LOAD Jul 2 11:09:35.046479 systemd[1]: Starting systemd-udevd.service... Jul 2 11:09:35.054740 systemd-udevd[474]: Using default interface naming scheme 'v252'. Jul 2 11:09:35.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:35.060718 systemd[1]: Started systemd-udevd.service. 
Jul 2 11:09:35.097605 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Jul 2 11:09:35.074209 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 11:09:35.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:35.103356 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 11:09:35.114749 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 11:09:35.167273 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 11:09:35.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:35.194489 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 11:09:35.196489 kernel: libata version 3.00 loaded. Jul 2 11:09:35.231070 kernel: ACPI: bus type USB registered Jul 2 11:09:35.231117 kernel: usbcore: registered new interface driver usbfs Jul 2 11:09:35.231131 kernel: usbcore: registered new interface driver hub Jul 2 11:09:35.266134 kernel: usbcore: registered new device driver usb Jul 2 11:09:35.266483 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 11:09:35.299905 kernel: AES CTR mode by8 optimization enabled Jul 2 11:09:35.300485 kernel: ahci 0000:00:17.0: version 3.0 Jul 2 11:09:35.318134 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 2 11:09:35.318174 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Jul 2 11:09:35.318260 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jul 2 11:09:35.376657 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 2 11:09:35.376740 kernel: pps pps0: new PPS source ptp0 Jul 2 11:09:35.392485 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 2 11:09:35.408482 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Jul 2 11:09:35.408570 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 11:09:35.408642 kernel: scsi host0: ahci Jul 2 11:09:35.408711 kernel: scsi host1: ahci Jul 2 11:09:35.408774 kernel: scsi host2: ahci Jul 2 11:09:35.410481 kernel: scsi host3: ahci Jul 2 11:09:35.410566 kernel: scsi host4: ahci Jul 2 11:09:35.410660 kernel: scsi host5: ahci Jul 2 11:09:35.411482 kernel: scsi host6: ahci Jul 2 11:09:35.411555 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Jul 2 11:09:35.411564 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Jul 2 11:09:35.411571 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Jul 2 11:09:35.411577 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Jul 2 11:09:35.411583 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Jul 2 11:09:35.411590 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Jul 2 11:09:35.411596 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Jul 2 11:09:35.441820 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 11:09:35.441894 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:bc Jul 2 11:09:35.661010 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 2 11:09:35.661086 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jul 2 11:09:35.725343 kernel: pps pps1: new PPS source ptp1 Jul 2 11:09:35.725420 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 2 11:09:35.725484 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 11:09:35.725493 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 11:09:35.731482 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 2 11:09:35.737483 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 11:09:35.748483 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 11:09:35.767330 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:bd Jul 2 11:09:35.767404 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 11:09:35.796031 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 2 11:09:35.796108 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 11:09:35.811543 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jul 2 11:09:35.868534 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 11:09:35.883482 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 2 11:09:35.896483 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 2 11:09:35.912541 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 11:09:35.926559 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 2 11:09:35.972044 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 11:09:35.972060 kernel: ata2.00: Features: NCQ-prio Jul 2 11:09:36.007339 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 11:09:36.007356 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 11:09:36.007423 kernel: ata1.00: Features: NCQ-prio Jul 2 11:09:36.020530 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Jul 2 11:09:36.053319 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 11:09:36.068531 kernel: ata2.00: configured for UDMA/133 Jul 2 11:09:36.083547 kernel: ata1.00: configured for UDMA/133 Jul 2 11:09:36.083563 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 2 11:09:36.118488 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 2 11:09:36.147659 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 11:09:36.147865 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 2 11:09:36.148012 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 2 11:09:36.198422 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 2 11:09:36.198601 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 11:09:36.198739 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 2 11:09:36.232014 kernel: xhci_hcd 0000:00:14.0: Host 
supports USB 3.1 Enhanced SuperSpeed Jul 2 11:09:36.245693 kernel: hub 1-0:1.0: USB hub found Jul 2 11:09:36.245906 kernel: hub 1-0:1.0: 16 ports detected Jul 2 11:09:36.287359 kernel: hub 2-0:1.0: USB hub found Jul 2 11:09:36.287506 kernel: hub 2-0:1.0: 10 ports detected Jul 2 11:09:36.302245 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:09:36.302263 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:09:36.337543 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 11:09:36.337623 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 11:09:36.337690 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jul 2 11:09:36.337748 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jul 2 11:09:36.337803 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 2 11:09:36.352277 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 11:09:36.352354 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 11:09:36.354482 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 2 11:09:36.359546 kernel: port_module: 9 callbacks suppressed Jul 2 11:09:36.359563 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 2 11:09:36.367547 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 2 11:09:36.382115 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 2 11:09:36.397492 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 11:09:36.426269 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 11:09:36.431487 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:09:36.431504 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 11:09:36.514766 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 2 11:09:36.529489 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:09:36.592143 kernel: ata2.00: 
Enabling discard_zeroes_data Jul 2 11:09:36.592160 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jul 2 11:09:36.606562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 11:09:36.635999 kernel: GPT:9289727 != 937703087 Jul 2 11:09:36.636015 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 11:09:36.651427 kernel: GPT:9289727 != 937703087 Jul 2 11:09:36.657533 kernel: hub 1-14:1.0: USB hub found Jul 2 11:09:36.657619 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 11:09:36.664236 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 11:09:36.664254 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:09:36.695980 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:09:36.695996 kernel: hub 1-14:1.0: 4 ports detected Jul 2 11:09:36.696067 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 11:09:36.783483 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Jul 2 11:09:36.800776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 11:09:36.839541 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Jul 2 11:09:36.839623 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (519) Jul 2 11:09:36.827842 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 11:09:36.849600 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 11:09:36.871840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 11:09:36.896359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 11:09:36.911364 systemd[1]: Starting disk-uuid.service... Jul 2 11:09:36.952630 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:09:36.952686 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:09:36.952845 disk-uuid[691]: Primary Header is updated. Jul 2 11:09:36.952845 disk-uuid[691]: Secondary Entries is updated. 
Jul 2 11:09:36.952845 disk-uuid[691]: Secondary Header is updated. Jul 2 11:09:37.002583 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:09:37.002597 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:09:37.978557 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:09:37.995038 disk-uuid[692]: The operation has completed successfully. Jul 2 11:09:38.003723 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:09:38.027542 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 2 11:09:38.034375 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 11:09:38.120409 kernel: audit: type=1130 audit(1719918578.041:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.120422 kernel: audit: type=1131 audit(1719918578.041:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.034419 systemd[1]: Finished disk-uuid.service. Jul 2 11:09:38.148521 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 11:09:38.044404 systemd[1]: Starting verity-setup.service... Jul 2 11:09:38.171485 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 11:09:38.180894 systemd[1]: Found device dev-mapper-usr.device. 
Jul 2 11:09:38.248538 kernel: usbcore: registered new interface driver usbhid Jul 2 11:09:38.248556 kernel: usbhid: USB HID core driver Jul 2 11:09:38.248564 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 2 11:09:38.228246 systemd[1]: Mounting sysusr-usr.mount... Jul 2 11:09:38.255692 systemd[1]: Finished verity-setup.service. Jul 2 11:09:38.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.314532 kernel: audit: type=1130 audit(1719918578.270:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.314557 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 2 11:09:38.426113 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 2 11:09:38.426130 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 2 11:09:38.455278 systemd[1]: Mounted sysusr-usr.mount. Jul 2 11:09:38.470716 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 11:09:38.463838 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 11:09:38.464232 systemd[1]: Starting ignition-setup.service... 
Jul 2 11:09:38.553727 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:09:38.553745 kernel: BTRFS info (device sda6): using free space tree Jul 2 11:09:38.553753 kernel: BTRFS info (device sda6): has skinny extents Jul 2 11:09:38.553760 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 11:09:38.496934 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 11:09:38.561900 systemd[1]: Finished ignition-setup.service. Jul 2 11:09:38.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.627499 kernel: audit: type=1130 audit(1719918578.578:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.578837 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 11:09:38.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.636148 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 11:09:38.715278 kernel: audit: type=1130 audit(1719918578.635:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.715294 kernel: audit: type=1334 audit(1719918578.692:24): prog-id=9 op=LOAD Jul 2 11:09:38.692000 audit: BPF prog-id=9 op=LOAD Jul 2 11:09:38.693368 systemd[1]: Starting systemd-networkd.service... 
Jul 2 11:09:38.729810 systemd-networkd[879]: lo: Link UP Jul 2 11:09:38.729813 systemd-networkd[879]: lo: Gained carrier Jul 2 11:09:38.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.757717 ignition[866]: Ignition 2.14.0 Jul 2 11:09:38.810731 kernel: audit: type=1130 audit(1719918578.744:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.730104 systemd-networkd[879]: Enumeration completed Jul 2 11:09:38.757721 ignition[866]: Stage: fetch-offline Jul 2 11:09:38.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.730173 systemd[1]: Started systemd-networkd.service. Jul 2 11:09:38.965146 kernel: audit: type=1130 audit(1719918578.831:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.965159 kernel: audit: type=1130 audit(1719918578.891:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.965169 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 11:09:38.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:09:38.757746 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:09:38.990314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Jul 2 11:09:38.730761 systemd-networkd[879]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:09:38.757760 ignition[866]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:09:38.744604 systemd[1]: Reached target network.target. Jul 2 11:09:39.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.760594 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:09:39.048643 iscsid[899]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 11:09:39.048643 iscsid[899]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 11:09:39.048643 iscsid[899]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 11:09:39.048643 iscsid[899]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 11:09:39.048643 iscsid[899]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jul 2 11:09:39.048643 iscsid[899]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 11:09:39.048643 iscsid[899]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 11:09:39.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:39.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:38.771235 unknown[866]: fetched base config from "system" Jul 2 11:09:38.760658 ignition[866]: parsed url from cmdline: "" Jul 2 11:09:38.771239 unknown[866]: fetched user config from "system" Jul 2 11:09:38.760661 ignition[866]: no config URL provided Jul 2 11:09:38.805186 systemd[1]: Starting iscsiuio.service... Jul 2 11:09:38.760663 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 11:09:38.817760 systemd[1]: Started iscsiuio.service. Jul 2 11:09:38.760685 ignition[866]: parsing config with SHA512: f160dcbf506083487b012ac1ccfc69f78f56a6e859d9c30ffd774fc3958735c4c3b1c53e3ec43da2c666c8ce1d98421fdfd7b29bf533ce0bd5ed47859d6b6b5a Jul 2 11:09:38.831797 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 11:09:38.771577 ignition[866]: fetch-offline: fetch-offline passed Jul 2 11:09:39.284604 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 2 11:09:38.891731 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 11:09:38.771579 ignition[866]: POST message to Packet Timeline Jul 2 11:09:38.892174 systemd[1]: Starting ignition-kargs.service... 
Jul 2 11:09:38.771584 ignition[866]: POST Status error: resource requires networking Jul 2 11:09:38.967235 systemd-networkd[879]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:09:38.771619 ignition[866]: Ignition finished successfully Jul 2 11:09:38.979059 systemd[1]: Starting iscsid.service... Jul 2 11:09:38.969663 ignition[889]: Ignition 2.14.0 Jul 2 11:09:39.004820 systemd[1]: Started iscsid.service. Jul 2 11:09:38.969667 ignition[889]: Stage: kargs Jul 2 11:09:39.025025 systemd[1]: Starting dracut-initqueue.service... Jul 2 11:09:38.969721 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:09:39.038730 systemd[1]: Finished dracut-initqueue.service. Jul 2 11:09:38.969730 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:09:39.057663 systemd[1]: Reached target remote-fs-pre.target. Jul 2 11:09:38.971035 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:09:39.076558 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 11:09:38.972416 ignition[889]: kargs: kargs passed Jul 2 11:09:39.076668 systemd[1]: Reached target remote-fs.target. Jul 2 11:09:38.972419 ignition[889]: POST message to Packet Timeline Jul 2 11:09:39.110141 systemd[1]: Starting dracut-pre-mount.service... Jul 2 11:09:38.972431 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:09:39.149867 systemd[1]: Finished dracut-pre-mount.service. Jul 2 11:09:38.976696 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41958->[::1]:53: read: connection refused Jul 2 11:09:39.278382 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 11:09:39.177220 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 Jul 2 11:09:39.306771 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:09:39.178168 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55874->[::1]:53: read: connection refused Jul 2 11:09:39.336936 systemd-networkd[879]: enp1s0f1np1: Link UP Jul 2 11:09:39.337123 systemd-networkd[879]: enp1s0f1np1: Gained carrier Jul 2 11:09:39.347776 systemd-networkd[879]: enp1s0f0np0: Link UP Jul 2 11:09:39.347991 systemd-networkd[879]: eno2: Link UP Jul 2 11:09:39.348163 systemd-networkd[879]: eno1: Link UP Jul 2 11:09:39.579219 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 Jul 2 11:09:39.580517 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59643->[::1]:53: read: connection refused Jul 2 11:09:39.990253 systemd-networkd[879]: enp1s0f0np0: Gained carrier Jul 2 11:09:39.998715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Jul 2 11:09:40.036792 systemd-networkd[879]: enp1s0f0np0: DHCPv4 address 145.40.90.137/31, gateway 145.40.90.136 acquired from 145.40.83.140 Jul 2 11:09:40.381066 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 Jul 2 11:09:40.382310 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46385->[::1]:53: read: connection refused Jul 2 11:09:40.671074 systemd-networkd[879]: enp1s0f1np1: Gained IPv6LL Jul 2 11:09:41.951065 systemd-networkd[879]: enp1s0f0np0: Gained IPv6LL Jul 2 11:09:41.983858 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 Jul 2 11:09:41.985164 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp 
[::1]:51708->[::1]:53: read: connection refused Jul 2 11:09:45.188527 ignition[889]: GET https://metadata.packet.net/metadata: attempt #6 Jul 2 11:09:45.227826 ignition[889]: GET result: OK Jul 2 11:09:45.552006 ignition[889]: Ignition finished successfully Jul 2 11:09:45.556353 systemd[1]: Finished ignition-kargs.service. Jul 2 11:09:45.646552 kernel: kauditd_printk_skb: 3 callbacks suppressed Jul 2 11:09:45.646586 kernel: audit: type=1130 audit(1719918585.568:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:45.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:45.577579 ignition[916]: Ignition 2.14.0 Jul 2 11:09:45.570755 systemd[1]: Starting ignition-disks.service... Jul 2 11:09:45.577583 ignition[916]: Stage: disks Jul 2 11:09:45.577638 ignition[916]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:09:45.577650 ignition[916]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:09:45.579490 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:09:45.580049 ignition[916]: disks: disks passed Jul 2 11:09:45.580051 ignition[916]: POST message to Packet Timeline Jul 2 11:09:45.580062 ignition[916]: GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:09:45.603601 ignition[916]: GET result: OK Jul 2 11:09:45.792875 ignition[916]: Ignition finished successfully Jul 2 11:09:45.795878 systemd[1]: Finished ignition-disks.service. 
Jul 2 11:09:45.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:45.808079 systemd[1]: Reached target initrd-root-device.target. Jul 2 11:09:45.897746 kernel: audit: type=1130 audit(1719918585.807:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:45.882677 systemd[1]: Reached target local-fs-pre.target. Jul 2 11:09:45.882715 systemd[1]: Reached target local-fs.target. Jul 2 11:09:45.906697 systemd[1]: Reached target sysinit.target. Jul 2 11:09:45.920697 systemd[1]: Reached target basic.target. Jul 2 11:09:45.934488 systemd[1]: Starting systemd-fsck-root.service... Jul 2 11:09:45.957682 systemd-fsck[932]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 11:09:45.969170 systemd[1]: Finished systemd-fsck-root.service. Jul 2 11:09:46.063630 kernel: audit: type=1130 audit(1719918585.977:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:46.063645 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 11:09:45.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:45.985716 systemd[1]: Mounting sysroot.mount... Jul 2 11:09:46.072227 systemd[1]: Mounted sysroot.mount. Jul 2 11:09:46.085830 systemd[1]: Reached target initrd-root-fs.target. Jul 2 11:09:46.093529 systemd[1]: Mounting sysroot-usr.mount... Jul 2 11:09:46.118351 systemd[1]: Starting flatcar-metadata-hostname.service... 
Jul 2 11:09:46.127054 systemd[1]: Starting flatcar-static-network.service...
Jul 2 11:09:46.142738 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 11:09:46.142904 systemd[1]: Reached target ignition-diskful.target.
Jul 2 11:09:46.162863 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 11:09:46.185921 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 11:09:46.198347 systemd[1]: Starting initrd-setup-root.service...
Jul 2 11:09:46.335340 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943)
Jul 2 11:09:46.335358 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 11:09:46.335366 kernel: BTRFS info (device sda6): using free space tree
Jul 2 11:09:46.335373 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 11:09:46.335380 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 2 11:09:46.335388 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 11:09:46.409730 kernel: audit: type=1130 audit(1719918586.344:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.409778 coreos-metadata[939]: Jul 02 11:09:46.313 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 2 11:09:46.409778 coreos-metadata[939]: Jul 02 11:09:46.336 INFO Fetch successful
Jul 2 11:09:46.409778 coreos-metadata[939]: Jul 02 11:09:46.358 INFO wrote hostname ci-3510.3.5-a-3a013adf74 to /sysroot/etc/hostname
Jul 2 11:09:46.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.504337 coreos-metadata[940]: Jul 02 11:09:46.314 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 2 11:09:46.504337 coreos-metadata[940]: Jul 02 11:09:46.337 INFO Fetch successful
Jul 2 11:09:46.643688 kernel: audit: type=1130 audit(1719918586.447:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.643701 kernel: audit: type=1130 audit(1719918586.512:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.643710 kernel: audit: type=1131 audit(1719918586.512:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.299183 systemd[1]: Finished initrd-setup-root.service.
Jul 2 11:09:46.657629 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory
Jul 2 11:09:46.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.345616 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 11:09:46.739715 kernel: audit: type=1130 audit(1719918586.665:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.739731 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 11:09:46.418814 systemd[1]: Finished flatcar-metadata-hostname.service.
Jul 2 11:09:46.758697 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 11:09:46.447763 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Jul 2 11:09:46.778679 ignition[1015]: INFO : Ignition 2.14.0
Jul 2 11:09:46.778679 ignition[1015]: INFO : Stage: mount
Jul 2 11:09:46.778679 ignition[1015]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 11:09:46.778679 ignition[1015]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 11:09:46.778679 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 11:09:46.778679 ignition[1015]: INFO : mount: mount passed
Jul 2 11:09:46.778679 ignition[1015]: INFO : POST message to Packet Timeline
Jul 2 11:09:46.778679 ignition[1015]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 2 11:09:46.778679 ignition[1015]: INFO : GET result: OK
Jul 2 11:09:46.447811 systemd[1]: Finished flatcar-static-network.service.
Jul 2 11:09:46.513101 systemd[1]: Starting ignition-mount.service...
Jul 2 11:09:46.636076 systemd[1]: Starting sysroot-boot.service...
Jul 2 11:09:46.650910 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Jul 2 11:09:46.650954 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Jul 2 11:09:46.654430 systemd[1]: Finished sysroot-boot.service.
Jul 2 11:09:46.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.979428 ignition[1015]: INFO : Ignition finished successfully
Jul 2 11:09:46.994722 kernel: audit: type=1130 audit(1719918586.920:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:46.910057 systemd[1]: Finished ignition-mount.service.
Jul 2 11:09:46.922638 systemd[1]: Starting ignition-files.service...
Jul 2 11:09:47.090560 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1032)
Jul 2 11:09:47.090572 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 11:09:47.090579 kernel: BTRFS info (device sda6): using free space tree
Jul 2 11:09:47.090586 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 11:09:47.090593 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 2 11:09:46.988429 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 11:09:47.122969 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 11:09:47.146632 ignition[1051]: INFO : Ignition 2.14.0
Jul 2 11:09:47.146632 ignition[1051]: INFO : Stage: files
Jul 2 11:09:47.146632 ignition[1051]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 11:09:47.146632 ignition[1051]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 11:09:47.146632 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 11:09:47.150395 unknown[1051]: wrote ssh authorized keys file for user: core
Jul 2 11:09:47.214690 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 11:09:47.214690 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 11:09:47.214690 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 11:09:47.214690 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 11:09:47.214690 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 11:09:47.214690 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 11:09:47.214690 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 11:09:47.214690 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 11:09:47.821453 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 11:09:47.882013 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 11:09:47.898795 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 11:09:47.898795 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 11:09:48.392294 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 11:09:48.431836 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 11:09:48.431836 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 11:09:48.481722 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1055)
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 11:09:48.481791 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1596609126"
Jul 2 11:09:48.481791 ignition[1051]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1596609126": device or resource busy
Jul 2 11:09:48.743818 ignition[1051]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1596609126", trying btrfs: device or resource busy
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1596609126"
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1596609126"
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1596609126"
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1596609126"
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 11:09:48.743818 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 11:09:48.891716 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Jul 2 11:09:49.041831 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 11:09:49.041831 ignition[1051]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 11:09:49.041831 ignition[1051]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 11:09:49.041831 ignition[1051]: INFO : files: op(11): [started] processing unit "packet-phone-home.service"
Jul 2 11:09:49.041831 ignition[1051]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service"
Jul 2 11:09:49.041831 ignition[1051]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 11:09:49.124665 ignition[1051]: INFO : files: files passed
Jul 2 11:09:49.124665 ignition[1051]: INFO : POST message to Packet Timeline
Jul 2 11:09:49.124665 ignition[1051]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 2 11:09:49.124665 ignition[1051]: INFO : GET result: OK
Jul 2 11:09:49.395692 kernel: audit: type=1130 audit(1719918589.295:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.280418 systemd[1]: Finished ignition-files.service.
Jul 2 11:09:49.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.412743 ignition[1051]: INFO : Ignition finished successfully
Jul 2 11:09:49.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.302528 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 11:09:49.446742 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 11:09:49.362749 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 11:09:49.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.363055 systemd[1]: Starting ignition-quench.service...
Jul 2 11:09:49.388782 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 11:09:49.405770 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 11:09:49.405818 systemd[1]: Finished ignition-quench.service.
Jul 2 11:09:49.420807 systemd[1]: Reached target ignition-complete.target.
Jul 2 11:09:49.437584 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 11:09:49.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.459430 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 11:09:49.459494 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 11:09:49.475816 systemd[1]: Reached target initrd-fs.target.
Jul 2 11:09:49.500736 systemd[1]: Reached target initrd.target.
Jul 2 11:09:49.515889 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 11:09:49.518048 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 11:09:49.551805 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 11:09:49.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.564605 systemd[1]: Starting initrd-cleanup.service...
Jul 2 11:09:49.594199 systemd[1]: Stopped target nss-lookup.target.
Jul 2 11:09:49.607978 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 11:09:49.624245 systemd[1]: Stopped target timers.target.
Jul 2 11:09:49.643060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 11:09:49.643424 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 11:09:49.659368 systemd[1]: Stopped target initrd.target.
Jul 2 11:09:49.673077 systemd[1]: Stopped target basic.target.
Jul 2 11:09:49.687078 systemd[1]: Stopped target ignition-complete.target.
Jul 2 11:09:49.702080 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 11:09:49.719074 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 11:09:49.734188 systemd[1]: Stopped target remote-fs.target.
Jul 2 11:09:49.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.753176 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 11:09:49.770208 systemd[1]: Stopped target sysinit.target.
Jul 2 11:09:49.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.785092 systemd[1]: Stopped target local-fs.target.
Jul 2 11:09:49.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.800072 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 11:09:49.818064 systemd[1]: Stopped target swap.target.
Jul 2 11:09:49.833958 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 11:09:49.834325 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 11:09:49.851302 systemd[1]: Stopped target cryptsetup.target.
Jul 2 11:09:49.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.866983 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 11:09:49.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.867351 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 11:09:50.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.882227 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 11:09:50.031761 ignition[1099]: INFO : Ignition 2.14.0
Jul 2 11:09:50.031761 ignition[1099]: INFO : Stage: umount
Jul 2 11:09:50.031761 ignition[1099]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 11:09:50.031761 ignition[1099]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Jul 2 11:09:50.031761 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Jul 2 11:09:50.031761 ignition[1099]: INFO : umount: umount passed
Jul 2 11:09:50.031761 ignition[1099]: INFO : POST message to Packet Timeline
Jul 2 11:09:50.031761 ignition[1099]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Jul 2 11:09:50.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.152770 iscsid[899]: iscsid shutting down.
Jul 2 11:09:49.882612 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 11:09:50.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.183032 ignition[1099]: INFO : GET result: OK
Jul 2 11:09:50.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.898274 systemd[1]: Stopped target paths.target.
Jul 2 11:09:49.911938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 11:09:49.915684 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 11:09:49.929085 systemd[1]: Stopped target slices.target.
Jul 2 11:09:49.943172 systemd[1]: Stopped target sockets.target.
Jul 2 11:09:50.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.276999 ignition[1099]: INFO : Ignition finished successfully
Jul 2 11:09:50.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.960073 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 11:09:50.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.960465 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 11:09:50.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.318000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 11:09:49.977284 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 11:09:49.977664 systemd[1]: Stopped ignition-files.service.
Jul 2 11:09:50.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.992130 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 11:09:50.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:49.992508 systemd[1]: Stopped flatcar-metadata-hostname.service.
Jul 2 11:09:50.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.010086 systemd[1]: Stopping ignition-mount.service...
Jul 2 11:09:50.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.022149 systemd[1]: Stopping iscsid.service...
Jul 2 11:09:50.039250 systemd[1]: Stopping sysroot-boot.service...
Jul 2 11:09:50.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.057585 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 11:09:50.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.057729 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 11:09:50.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.077872 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 11:09:50.078022 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 11:09:50.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.111586 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 11:09:50.111887 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 11:09:50.111938 systemd[1]: Stopped iscsid.service.
Jul 2 11:09:50.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.130002 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 11:09:50.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.130065 systemd[1]: Closed iscsid.socket.
Jul 2 11:09:50.658043 kernel: kauditd_printk_skb: 34 callbacks suppressed
Jul 2 11:09:50.658058 kernel: audit: type=1131 audit(1719918590.577:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.137802 systemd[1]: Stopping iscsiuio.service...
Jul 2 11:09:50.159910 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 11:09:50.745564 kernel: audit: type=1131 audit(1719918590.680:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.160029 systemd[1]: Stopped iscsiuio.service.
Jul 2 11:09:50.816612 kernel: audit: type=1131 audit(1719918590.753:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.174120 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 11:09:50.881493 kernel: audit: type=1131 audit(1719918590.824:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.174330 systemd[1]: Finished initrd-cleanup.service.
Jul 2 11:09:50.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.192789 systemd[1]: Stopped target network.target.
Jul 2 11:09:51.019665 kernel: audit: type=1130 audit(1719918590.896:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:51.019677 kernel: audit: type=1131 audit(1719918590.896:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:50.205783 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 11:09:50.205886 systemd[1]: Closed iscsiuio.socket.
Jul 2 11:09:50.219999 systemd[1]: Stopping systemd-networkd.service...
Jul 2 11:09:50.230610 systemd-networkd[879]: enp1s0f1np1: DHCPv6 lease lost
Jul 2 11:09:50.236913 systemd[1]: Stopping systemd-resolved.service...
Jul 2 11:09:50.245650 systemd-networkd[879]: enp1s0f0np0: DHCPv6 lease lost
Jul 2 11:09:51.058000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 11:09:50.253490 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 11:09:51.092562 kernel: audit: type=1334 audit(1719918591.058:81): prog-id=9 op=UNLOAD
Jul 2 11:09:50.253721 systemd[1]: Stopped sysroot-boot.service.
Jul 2 11:09:50.269472 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 11:09:50.269738 systemd[1]: Stopped systemd-resolved.service.
Jul 2 11:09:50.285490 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 11:09:50.285743 systemd[1]: Stopped systemd-networkd.service.
Jul 2 11:09:50.301428 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 11:09:50.301667 systemd[1]: Stopped ignition-mount.service.
Jul 2 11:09:50.319254 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 11:09:50.319346 systemd[1]: Closed systemd-networkd.socket. Jul 2 11:09:50.334831 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 11:09:51.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:50.334961 systemd[1]: Stopped ignition-disks.service. Jul 2 11:09:51.251689 kernel: audit: type=1131 audit(1719918591.168:82): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:50.350894 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 11:09:50.351017 systemd[1]: Stopped ignition-kargs.service. Jul 2 11:09:50.365976 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 11:09:50.366131 systemd[1]: Stopped ignition-setup.service. Jul 2 11:09:50.383986 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 11:09:50.384131 systemd[1]: Stopped initrd-setup-root.service. Jul 2 11:09:50.401792 systemd[1]: Stopping network-cleanup.service... Jul 2 11:09:50.413690 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 11:09:50.413930 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 11:09:50.428944 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 11:09:50.429078 systemd[1]: Stopped systemd-sysctl.service. Jul 2 11:09:50.444148 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 11:09:50.444294 systemd[1]: Stopped systemd-modules-load.service. Jul 2 11:09:50.462220 systemd[1]: Stopping systemd-udevd.service... Jul 2 11:09:50.480756 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 2 11:09:50.482332 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 11:09:50.482667 systemd[1]: Stopped systemd-udevd.service. Jul 2 11:09:50.497415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 11:09:50.497569 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 11:09:50.511839 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 11:09:50.511939 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 11:09:50.528771 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 11:09:50.528902 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 11:09:50.545954 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 11:09:50.546101 systemd[1]: Stopped dracut-cmdline.service. Jul 2 11:09:50.561838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 11:09:50.561986 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 11:09:50.580138 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 11:09:50.666563 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 11:09:50.666590 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 11:09:50.737778 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 11:09:50.737802 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 11:09:50.753614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 11:09:50.753671 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 11:09:50.825686 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 11:09:50.826194 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 11:09:50.826276 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 11:09:51.154150 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 11:09:51.306482 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). 
Jul 2 11:09:51.154390 systemd[1]: Stopped network-cleanup.service. Jul 2 11:09:51.169007 systemd[1]: Reached target initrd-switch-root.target. Jul 2 11:09:51.244996 systemd[1]: Starting initrd-switch-root.service... Jul 2 11:09:51.263275 systemd[1]: Switching root. Jul 2 11:09:51.306608 systemd-journald[268]: Journal stopped Jul 2 11:09:55.068169 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 11:09:55.068182 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 11:09:55.068191 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 11:09:55.068196 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 11:09:55.068201 kernel: SELinux: policy capability open_perms=1 Jul 2 11:09:55.068206 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 11:09:55.068213 kernel: SELinux: policy capability always_check_network=0 Jul 2 11:09:55.068218 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 11:09:55.068223 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 11:09:55.068229 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 11:09:55.068235 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 11:09:55.068240 kernel: audit: type=1403 audit(1719918591.735:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 11:09:55.068246 systemd[1]: Successfully loaded SELinux policy in 314.225ms. Jul 2 11:09:55.068253 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.121ms. Jul 2 11:09:55.068262 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 11:09:55.068268 systemd[1]: Detected architecture x86-64. Jul 2 11:09:55.068274 systemd[1]: Detected first boot. 
Jul 2 11:09:55.068280 systemd[1]: Hostname set to . Jul 2 11:09:55.068286 systemd[1]: Initializing machine ID from random generator. Jul 2 11:09:55.068292 kernel: audit: type=1400 audit(1719918592.026:84): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 11:09:55.068298 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 11:09:55.068305 systemd[1]: Populated /etc with preset unit settings. Jul 2 11:09:55.068311 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:09:55.068318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:09:55.068324 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:09:55.068330 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 11:09:55.068336 systemd[1]: Stopped initrd-switch-root.service. Jul 2 11:09:55.068343 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 11:09:55.068350 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 11:09:55.068356 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 11:09:55.068363 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 11:09:55.068369 systemd[1]: Created slice system-getty.slice. Jul 2 11:09:55.068375 systemd[1]: Created slice system-modprobe.slice. Jul 2 11:09:55.068381 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 2 11:09:55.068387 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 11:09:55.068394 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 11:09:55.068400 systemd[1]: Created slice user.slice. Jul 2 11:09:55.068406 systemd[1]: Started systemd-ask-password-console.path. Jul 2 11:09:55.068412 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 11:09:55.068418 systemd[1]: Set up automount boot.automount. Jul 2 11:09:55.068426 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 11:09:55.068432 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 11:09:55.068439 systemd[1]: Stopped target initrd-fs.target. Jul 2 11:09:55.068445 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 11:09:55.068452 systemd[1]: Reached target integritysetup.target. Jul 2 11:09:55.068459 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 11:09:55.068465 systemd[1]: Reached target remote-fs.target. Jul 2 11:09:55.068471 systemd[1]: Reached target slices.target. Jul 2 11:09:55.068480 systemd[1]: Reached target swap.target. Jul 2 11:09:55.068486 systemd[1]: Reached target torcx.target. Jul 2 11:09:55.068492 systemd[1]: Reached target veritysetup.target. Jul 2 11:09:55.068499 systemd[1]: Listening on systemd-coredump.socket. Jul 2 11:09:55.068506 systemd[1]: Listening on systemd-initctl.socket. Jul 2 11:09:55.068512 systemd[1]: Listening on systemd-networkd.socket. Jul 2 11:09:55.068519 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 11:09:55.068526 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 11:09:55.068533 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 11:09:55.068540 systemd[1]: Mounting dev-hugepages.mount... Jul 2 11:09:55.068546 systemd[1]: Mounting dev-mqueue.mount... Jul 2 11:09:55.068553 systemd[1]: Mounting media.mount... Jul 2 11:09:55.068559 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 11:09:55.068566 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 11:09:55.068572 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 11:09:55.068578 systemd[1]: Mounting tmp.mount... Jul 2 11:09:55.068585 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 11:09:55.068592 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:09:55.068599 systemd[1]: Starting kmod-static-nodes.service... Jul 2 11:09:55.068605 systemd[1]: Starting modprobe@configfs.service... Jul 2 11:09:55.068611 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:09:55.068618 systemd[1]: Starting modprobe@drm.service... Jul 2 11:09:55.068624 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:09:55.068631 systemd[1]: Starting modprobe@fuse.service... Jul 2 11:09:55.068637 kernel: fuse: init (API version 7.34) Jul 2 11:09:55.068643 systemd[1]: Starting modprobe@loop.service... Jul 2 11:09:55.068650 kernel: loop: module loaded Jul 2 11:09:55.068656 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 11:09:55.068663 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 11:09:55.068669 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 11:09:55.068675 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 11:09:55.068682 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 11:09:55.068688 systemd[1]: Stopped systemd-journald.service. Jul 2 11:09:55.068695 systemd[1]: Starting systemd-journald.service... Jul 2 11:09:55.068701 systemd[1]: Starting systemd-modules-load.service... Jul 2 11:09:55.068710 systemd-journald[1251]: Journal started Jul 2 11:09:55.068734 systemd-journald[1251]: Runtime Journal (/run/log/journal/1d040a5da2794cf9ba27a8d3f6216f50) is 8.0M, max 640.0M, 632.0M free. 
Jul 2 11:09:51.735000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 11:09:52.026000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 11:09:52.081000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 11:09:52.081000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 11:09:52.081000 audit: BPF prog-id=10 op=LOAD Jul 2 11:09:52.081000 audit: BPF prog-id=10 op=UNLOAD Jul 2 11:09:52.081000 audit: BPF prog-id=11 op=LOAD Jul 2 11:09:52.081000 audit: BPF prog-id=11 op=UNLOAD Jul 2 11:09:52.150000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 11:09:52.150000 audit[1141]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:09:52.150000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 11:09:52.176000 audit[1141]: AVC avc: denied 
{ associate } for pid=1141 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 11:09:52.176000 audit[1141]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:09:52.176000 audit: CWD cwd="/" Jul 2 11:09:52.176000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:09:52.176000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:09:52.176000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 11:09:53.708000 audit: BPF prog-id=12 op=LOAD Jul 2 11:09:53.708000 audit: BPF prog-id=3 op=UNLOAD Jul 2 11:09:53.708000 audit: BPF prog-id=13 op=LOAD Jul 2 11:09:53.708000 audit: BPF prog-id=14 op=LOAD Jul 2 11:09:53.708000 audit: BPF prog-id=4 op=UNLOAD Jul 2 11:09:53.708000 audit: BPF prog-id=5 op=UNLOAD Jul 2 11:09:53.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:09:53.755000 audit: BPF prog-id=12 op=UNLOAD Jul 2 11:09:53.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:53.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:54.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:09:55.040000 audit: BPF prog-id=15 op=LOAD Jul 2 11:09:55.041000 audit: BPF prog-id=16 op=LOAD Jul 2 11:09:55.041000 audit: BPF prog-id=17 op=LOAD Jul 2 11:09:55.041000 audit: BPF prog-id=13 op=UNLOAD Jul 2 11:09:55.041000 audit: BPF prog-id=14 op=UNLOAD Jul 2 11:09:55.065000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 11:09:55.065000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc8d67d840 a2=4000 a3=7ffc8d67d8dc items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:09:55.065000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 11:09:53.706768 systemd[1]: Queued start job for default target multi-user.target. Jul 2 11:09:52.148121 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:09:53.709340 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 2 11:09:52.148596 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 11:09:52.148615 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 11:09:52.148641 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 11:09:52.148650 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 11:09:52.148675 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 11:09:52.148686 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 11:09:52.148852 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 11:09:52.148886 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 11:09:52.148897 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 11:09:52.150205 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 11:09:52.150237 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 11:09:52.150253 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 11:09:52.150265 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 11:09:52.150280 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 11:09:52.150292 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 11:09:53.354057 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:09:53.354196 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:09:53.354254 
/usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:09:53.354347 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 11:09:53.354377 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 11:09:53.354411 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-07-02T11:09:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 11:09:55.099674 systemd[1]: Starting systemd-network-generator.service... Jul 2 11:09:55.121512 systemd[1]: Starting systemd-remount-fs.service... Jul 2 11:09:55.142511 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 11:09:55.176066 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 11:09:55.176086 systemd[1]: Stopped verity-setup.service. Jul 2 11:09:55.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:09:55.210527 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:09:55.225678 systemd[1]: Started systemd-journald.service. Jul 2 11:09:55.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.234142 systemd[1]: Mounted dev-hugepages.mount. Jul 2 11:09:55.241771 systemd[1]: Mounted dev-mqueue.mount. Jul 2 11:09:55.248757 systemd[1]: Mounted media.mount. Jul 2 11:09:55.255750 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 11:09:55.264751 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 11:09:55.273729 systemd[1]: Mounted tmp.mount. Jul 2 11:09:55.280843 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 11:09:55.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.289838 systemd[1]: Finished kmod-static-nodes.service. Jul 2 11:09:55.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.298865 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 11:09:55.298978 systemd[1]: Finished modprobe@configfs.service. Jul 2 11:09:55.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:09:55.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.307945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:09:55.308087 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:09:55.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.318082 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 11:09:55.318285 systemd[1]: Finished modprobe@drm.service. Jul 2 11:09:55.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.327501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:09:55.327919 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:09:55.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:09:55.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.337454 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 11:09:55.337872 systemd[1]: Finished modprobe@fuse.service. Jul 2 11:09:55.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.347426 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:09:55.347840 systemd[1]: Finished modprobe@loop.service. Jul 2 11:09:55.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.358364 systemd[1]: Finished systemd-modules-load.service. Jul 2 11:09:55.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:09:55.367305 systemd[1]: Finished systemd-network-generator.service. 
Jul 2 11:09:55.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.376302 systemd[1]: Finished systemd-remount-fs.service.
Jul 2 11:09:55.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.385293 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 11:09:55.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.394837 systemd[1]: Reached target network-pre.target.
Jul 2 11:09:55.406442 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 2 11:09:55.416210 systemd[1]: Mounting sys-kernel-config.mount...
Jul 2 11:09:55.424707 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 11:09:55.425694 systemd[1]: Starting systemd-hwdb-update.service...
Jul 2 11:09:55.433172 systemd[1]: Starting systemd-journal-flush.service...
Jul 2 11:09:55.437022 systemd-journald[1251]: Time spent on flushing to /var/log/journal/1d040a5da2794cf9ba27a8d3f6216f50 is 14.457ms for 1578 entries.
Jul 2 11:09:55.437022 systemd-journald[1251]: System Journal (/var/log/journal/1d040a5da2794cf9ba27a8d3f6216f50) is 8.0M, max 195.6M, 187.6M free.
Jul 2 11:09:55.477200 systemd-journald[1251]: Received client request to flush runtime journal.
Jul 2 11:09:55.449605 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 11:09:55.450075 systemd[1]: Starting systemd-random-seed.service...
Jul 2 11:09:55.460609 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 11:09:55.461099 systemd[1]: Starting systemd-sysctl.service...
Jul 2 11:09:55.468092 systemd[1]: Starting systemd-sysusers.service...
Jul 2 11:09:55.475096 systemd[1]: Starting systemd-udev-settle.service...
Jul 2 11:09:55.482654 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 2 11:09:55.491677 systemd[1]: Mounted sys-kernel-config.mount.
Jul 2 11:09:55.500694 systemd[1]: Finished systemd-journal-flush.service.
Jul 2 11:09:55.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.508708 systemd[1]: Finished systemd-random-seed.service.
Jul 2 11:09:55.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.516685 systemd[1]: Finished systemd-sysctl.service.
Jul 2 11:09:55.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.524679 systemd[1]: Finished systemd-sysusers.service.
Jul 2 11:09:55.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.533712 systemd[1]: Reached target first-boot-complete.target.
Jul 2 11:09:55.542254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 11:09:55.551614 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 11:09:55.559787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 11:09:55.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.733342 systemd[1]: Finished systemd-hwdb-update.service.
Jul 2 11:09:55.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.755668 kernel: kauditd_printk_skb: 62 callbacks suppressed
Jul 2 11:09:55.755758 kernel: audit: type=1130 audit(1719918595.741:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.796000 audit: BPF prog-id=18 op=LOAD
Jul 2 11:09:55.813507 kernel: audit: type=1334 audit(1719918595.796:139): prog-id=18 op=LOAD
Jul 2 11:09:55.813554 kernel: audit: type=1334 audit(1719918595.813:140): prog-id=19 op=LOAD
Jul 2 11:09:55.813000 audit: BPF prog-id=19 op=LOAD
Jul 2 11:09:55.813845 systemd[1]: Starting systemd-udevd.service...
Jul 2 11:09:55.813000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 11:09:55.813000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 11:09:55.830526 kernel: audit: type=1334 audit(1719918595.813:141): prog-id=7 op=UNLOAD
Jul 2 11:09:55.830561 kernel: audit: type=1334 audit(1719918595.813:142): prog-id=8 op=UNLOAD
Jul 2 11:09:55.875109 systemd-udevd[1270]: Using default interface naming scheme 'v252'.
Jul 2 11:09:55.890215 systemd[1]: Started systemd-udevd.service.
Jul 2 11:09:55.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.900897 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Jul 2 11:09:55.940498 kernel: audit: type=1130 audit(1719918595.898:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:55.940616 kernel: audit: type=1334 audit(1719918595.939:144): prog-id=20 op=LOAD
Jul 2 11:09:55.939000 audit: BPF prog-id=20 op=LOAD
Jul 2 11:09:55.941088 systemd[1]: Starting systemd-networkd.service...
Jul 2 11:09:55.958489 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Jul 2 11:09:55.977487 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1315)
Jul 2 11:09:55.977570 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 11:09:55.977598 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 11:09:56.037514 kernel: IPMI message handler: version 39.2
Jul 2 11:09:56.037560 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 11:09:56.037573 kernel: ACPI: button: Power Button [PWRF]
Jul 2 11:09:56.053015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 11:09:55.935000 audit[1271]: AVC avc: denied { confidentiality } for pid=1271 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 11:09:56.082482 kernel: audit: type=1400 audit(1719918595.935:145): avc: denied { confidentiality } for pid=1271 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 11:09:56.141000 audit: BPF prog-id=21 op=LOAD
Jul 2 11:09:56.160481 kernel: audit: type=1334 audit(1719918596.141:146): prog-id=21 op=LOAD
Jul 2 11:09:56.160514 kernel: audit: type=1334 audit(1719918596.160:147): prog-id=22 op=LOAD
Jul 2 11:09:56.160000 audit: BPF prog-id=22 op=LOAD
Jul 2 11:09:56.178000 audit: BPF prog-id=23 op=LOAD
Jul 2 11:09:56.182429 systemd[1]: Starting systemd-userdbd.service...
Jul 2 11:09:55.935000 audit[1271]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dd3b28fbe0 a1=4d8bc a2=7f793ed31bc5 a3=5 items=42 ppid=1270 pid=1271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 11:09:55.935000 audit: CWD cwd="/"
Jul 2 11:09:55.935000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=1 name=(null) inode=21745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=2 name=(null) inode=21745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=3 name=(null) inode=21746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=4 name=(null) inode=21745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=5 name=(null) inode=21747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=6 name=(null) inode=21745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=7 name=(null) inode=21748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=8 name=(null) inode=21748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=9 name=(null) inode=21749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=10 name=(null) inode=21748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=11 name=(null) inode=21750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=12 name=(null) inode=21748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=13 name=(null) inode=21751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=14 name=(null) inode=21748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=15 name=(null) inode=21752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=16 name=(null) inode=21748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=17 name=(null) inode=21753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=18 name=(null) inode=21745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=19 name=(null) inode=21754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=20 name=(null) inode=21754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=21 name=(null) inode=21755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=22 name=(null) inode=21754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=23 name=(null) inode=21756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=24 name=(null) inode=21754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=25 name=(null) inode=21757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=26 name=(null) inode=21754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=27 name=(null) inode=21758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=28 name=(null) inode=21754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=29 name=(null) inode=21759 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=30 name=(null) inode=21745 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=31 name=(null) inode=21760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=32 name=(null) inode=21760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=33 name=(null) inode=21761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=34 name=(null) inode=21760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=35 name=(null) inode=21762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=36 name=(null) inode=21760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=37 name=(null) inode=21763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=38 name=(null) inode=21760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=39 name=(null) inode=21764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=40 name=(null) inode=21760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PATH item=41 name=(null) inode=21765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 11:09:55.935000 audit: PROCTITLE proctitle="(udev-worker)"
Jul 2 11:09:56.214524 kernel: ipmi device interface
Jul 2 11:09:56.241508 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Jul 2 11:09:56.241872 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Jul 2 11:09:56.245280 systemd[1]: Started systemd-userdbd.service.
Jul 2 11:09:56.262489 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI)
Jul 2 11:09:56.262802 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Jul 2 11:09:56.263001 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Jul 2 11:09:56.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:56.340485 kernel: iTCO_vendor_support: vendor-support=0
Jul 2 11:09:56.340519 kernel: ipmi_si: IPMI System Interface driver
Jul 2 11:09:56.374272 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Jul 2 11:09:56.374425 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Jul 2 11:09:56.410435 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Jul 2 11:09:56.410530 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Jul 2 11:09:56.447282 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Jul 2 11:09:56.489490 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Jul 2 11:09:56.489576 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Jul 2 11:09:56.489592 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Jul 2 11:09:56.532492 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Jul 2 11:09:56.532580 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Jul 2 11:09:56.532657 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Jul 2 11:09:56.588484 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Jul 2 11:09:56.662634 systemd-networkd[1325]: bond0: netdev ready
Jul 2 11:09:56.667630 kernel: intel_rapl_common: Found RAPL domain package
Jul 2 11:09:56.667662 kernel: intel_rapl_common: Found RAPL domain core
Jul 2 11:09:56.667674 kernel: intel_rapl_common: Found RAPL domain dram
Jul 2 11:09:56.672235 systemd-networkd[1325]: lo: Link UP
Jul 2 11:09:56.672243 systemd-networkd[1325]: lo: Gained carrier
Jul 2 11:09:56.673629 systemd-networkd[1325]: Enumeration completed
Jul 2 11:09:56.673718 systemd[1]: Started systemd-networkd.service.
Jul 2 11:09:56.674806 systemd-networkd[1325]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Jul 2 11:09:56.685150 systemd-networkd[1325]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:e0:f7:a5.network.
Jul 2 11:09:56.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:56.734521 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Jul 2 11:09:56.754522 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jul 2 11:09:56.760727 systemd[1]: Finished systemd-udev-settle.service.
Jul 2 11:09:56.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:56.770209 systemd[1]: Starting lvm2-activation-early.service...
Jul 2 11:09:56.786592 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 11:09:56.809852 systemd[1]: Finished lvm2-activation-early.service.
Jul 2 11:09:56.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:56.818558 systemd[1]: Reached target cryptsetup.target.
Jul 2 11:09:56.828110 systemd[1]: Starting lvm2-activation.service...
Jul 2 11:09:56.830085 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 11:09:56.859833 systemd[1]: Finished lvm2-activation.service.
Jul 2 11:09:56.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:56.868569 systemd[1]: Reached target local-fs-pre.target.
Jul 2 11:09:56.877529 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 11:09:56.877554 systemd[1]: Reached target local-fs.target.
Jul 2 11:09:56.886526 systemd[1]: Reached target machines.target.
Jul 2 11:09:56.896140 systemd[1]: Starting ldconfig.service...
Jul 2 11:09:56.903100 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 11:09:56.903121 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 11:09:56.903687 systemd[1]: Starting systemd-boot-update.service...
Jul 2 11:09:56.911007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 2 11:09:56.921128 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 2 11:09:56.921819 systemd[1]: Starting systemd-sysext.service...
Jul 2 11:09:56.922091 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1378 (bootctl)
Jul 2 11:09:56.922733 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 2 11:09:56.928269 systemd[1]: Unmounting usr-share-oem.mount...
Jul 2 11:09:56.943518 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 2 11:09:56.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:56.943738 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 2 11:09:56.943816 systemd[1]: Unmounted usr-share-oem.mount.
Jul 2 11:09:56.986513 kernel: loop0: detected capacity change from 0 to 210664
Jul 2 11:09:57.062672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 11:09:57.062992 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 2 11:09:57.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.097485 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 11:09:57.111865 systemd-fsck[1387]: fsck.fat 4.2 (2021-01-31)
Jul 2 11:09:57.111865 systemd-fsck[1387]: /dev/sda1: 789 files, 119238/258078 clusters
Jul 2 11:09:57.112778 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 2 11:09:57.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.124502 systemd[1]: Mounting boot.mount...
Jul 2 11:09:57.141527 kernel: loop1: detected capacity change from 0 to 210664
Jul 2 11:09:57.148335 systemd[1]: Mounted boot.mount.
Jul 2 11:09:57.162471 (sd-sysext)[1390]: Using extensions 'kubernetes'.
Jul 2 11:09:57.162713 (sd-sysext)[1390]: Merged extensions into '/usr'.
Jul 2 11:09:57.172000 systemd[1]: Finished systemd-boot-update.service.
Jul 2 11:09:57.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.180718 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 11:09:57.181444 systemd[1]: Mounting usr-share-oem.mount...
Jul 2 11:09:57.189661 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 11:09:57.190340 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 11:09:57.197027 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 11:09:57.204037 systemd[1]: Starting modprobe@loop.service...
Jul 2 11:09:57.210573 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 11:09:57.210637 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 11:09:57.210703 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 11:09:57.212265 systemd[1]: Mounted usr-share-oem.mount.
Jul 2 11:09:57.218743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 11:09:57.218806 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 11:09:57.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.226767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 11:09:57.226828 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 11:09:57.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.235741 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 11:09:57.235802 systemd[1]: Finished modprobe@loop.service.
Jul 2 11:09:57.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.243779 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 11:09:57.243838 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 11:09:57.244332 systemd[1]: Finished systemd-sysext.service.
Jul 2 11:09:57.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.253069 systemd[1]: Starting ensure-sysext.service...
Jul 2 11:09:57.260020 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 2 11:09:57.265696 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 11:09:57.266353 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 11:09:57.267407 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 11:09:57.269827 systemd[1]: Reloading.
Jul 2 11:09:57.296426 /usr/lib/systemd/system-generators/torcx-generator[1417]: time="2024-07-02T11:09:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 11:09:57.296452 /usr/lib/systemd/system-generators/torcx-generator[1417]: time="2024-07-02T11:09:57Z" level=info msg="torcx already run"
Jul 2 11:09:57.304752 ldconfig[1377]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 11:09:57.348687 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 11:09:57.348694 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 11:09:57.359828 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 11:09:57.401000 audit: BPF prog-id=24 op=LOAD
Jul 2 11:09:57.401000 audit: BPF prog-id=20 op=UNLOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=25 op=LOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=26 op=LOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=27 op=LOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=28 op=LOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=29 op=LOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=18 op=UNLOAD
Jul 2 11:09:57.402000 audit: BPF prog-id=19 op=UNLOAD
Jul 2 11:09:57.403000 audit: BPF prog-id=30 op=LOAD
Jul 2 11:09:57.403000 audit: BPF prog-id=21 op=UNLOAD
Jul 2 11:09:57.403000 audit: BPF prog-id=31 op=LOAD
Jul 2 11:09:57.403000 audit: BPF prog-id=32 op=LOAD
Jul 2 11:09:57.403000 audit: BPF prog-id=22 op=UNLOAD
Jul 2 11:09:57.403000 audit: BPF prog-id=23 op=UNLOAD
Jul 2 11:09:57.405302 systemd[1]: Finished ldconfig.service.
Jul 2 11:09:57.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.413122 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 2 11:09:57.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 11:09:57.425028 systemd[1]: Starting audit-rules.service...
Jul 2 11:09:57.433111 systemd[1]: Starting clean-ca-certificates.service...
Jul 2 11:09:57.441000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 11:09:57.441000 audit[1494]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdcbebb830 a2=420 a3=0 items=0 ppid=1478 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 11:09:57.441000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 11:09:57.442529 augenrules[1494]: No rules
Jul 2 11:09:57.443215 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 2 11:09:57.453511 systemd[1]: Starting systemd-resolved.service...
Jul 2 11:09:57.462488 systemd[1]: Starting systemd-timesyncd.service...
Jul 2 11:09:57.471096 systemd[1]: Starting systemd-update-utmp.service...
Jul 2 11:09:57.478989 systemd[1]: Finished audit-rules.service.
Jul 2 11:09:57.486718 systemd[1]: Finished clean-ca-certificates.service.
Jul 2 11:09:57.495721 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 11:09:57.509288 systemd[1]: Finished systemd-update-utmp.service.
Jul 2 11:09:57.518052 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 11:09:57.518689 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 11:09:57.526047 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 11:09:57.534053 systemd[1]: Starting modprobe@loop.service...
Jul 2 11:09:57.538532 systemd-resolved[1500]: Positive Trust Anchors:
Jul 2 11:09:57.538539 systemd-resolved[1500]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 11:09:57.538558 systemd-resolved[1500]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 11:09:57.540565 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 11:09:57.540633 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 11:09:57.541380 systemd[1]: Starting systemd-update-done.service...
Jul 2 11:09:57.542735 systemd-resolved[1500]: Using system hostname 'ci-3510.3.5-a-3a013adf74'.
Jul 2 11:09:57.549519 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 11:09:57.550046 systemd[1]: Started systemd-timesyncd.service.
Jul 2 11:09:57.559763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 11:09:57.559832 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 11:09:57.568730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 11:09:57.568794 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 11:09:57.576691 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 11:09:57.576752 systemd[1]: Finished modprobe@loop.service.
Jul 2 11:09:57.584687 systemd[1]: Finished systemd-update-done.service.
Jul 2 11:09:57.592712 systemd[1]: Reached target time-set.target.
Jul 2 11:09:57.600541 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:09:57.600594 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:09:57.602027 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:09:57.602642 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:09:57.610059 systemd[1]: Starting modprobe@drm.service... Jul 2 11:09:57.617021 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:09:57.624041 systemd[1]: Starting modprobe@loop.service... Jul 2 11:09:57.630560 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:09:57.630628 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:09:57.631228 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 11:09:57.639531 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 11:09:57.640139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:09:57.640206 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:09:57.648711 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 11:09:57.648771 systemd[1]: Finished modprobe@drm.service. Jul 2 11:09:57.656693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:09:57.656752 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:09:57.664690 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:09:57.664748 systemd[1]: Finished modprobe@loop.service. 
Jul 2 11:09:57.672773 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:09:57.672831 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:09:57.673371 systemd[1]: Finished ensure-sysext.service. Jul 2 11:09:57.703522 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 11:09:57.727481 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 2 11:09:57.727506 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:09:57.749555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Jul 2 11:09:57.768552 systemd-networkd[1325]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:e0:f7:a4.network. Jul 2 11:09:57.769065 systemd[1]: Started systemd-resolved.service. Jul 2 11:09:57.777748 systemd[1]: Reached target network.target. Jul 2 11:09:57.785556 systemd[1]: Reached target nss-lookup.target. Jul 2 11:09:57.793689 systemd[1]: Reached target sysinit.target. Jul 2 11:09:57.801683 systemd[1]: Started motdgen.path. Jul 2 11:09:57.808670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 11:09:57.818620 systemd[1]: Started logrotate.timer. Jul 2 11:09:57.825619 systemd[1]: Started mdadm.timer. Jul 2 11:09:57.832562 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 11:09:57.840569 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 11:09:57.840590 systemd[1]: Reached target paths.target. Jul 2 11:09:57.847629 systemd[1]: Reached target timers.target. Jul 2 11:09:57.854776 systemd[1]: Listening on dbus.socket. Jul 2 11:09:57.871172 systemd[1]: Starting docker.socket... 
Jul 2 11:09:57.878691 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:09:57.886187 systemd[1]: Listening on sshd.socket. Jul 2 11:09:57.893753 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:09:57.893989 systemd[1]: Listening on docker.socket. Jul 2 11:09:57.900720 systemd[1]: Reached target sockets.target. Jul 2 11:09:57.908644 systemd[1]: Reached target basic.target. Jul 2 11:09:57.915664 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 11:09:57.915684 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 11:09:57.916331 systemd[1]: Starting containerd.service... Jul 2 11:09:57.924328 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 11:09:57.933555 systemd[1]: Starting coreos-metadata.service... Jul 2 11:09:57.941420 systemd[1]: Starting dbus.service... Jul 2 11:09:57.947226 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 11:09:57.953904 jq[1523]: false Jul 2 11:09:57.954362 systemd[1]: Starting extend-filesystems.service... Jul 2 11:09:57.961572 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 11:09:57.961879 dbus-daemon[1522]: [system] SELinux support is enabled Jul 2 11:09:57.962483 systemd[1]: Starting motdgen.service... 
Jul 2 11:09:57.962821 extend-filesystems[1524]: Found loop1 Jul 2 11:09:57.962821 extend-filesystems[1524]: Found sda Jul 2 11:09:58.049605 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jul 2 11:09:58.049626 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 2 11:09:58.049728 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 2 11:09:58.049741 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 2 11:09:58.049778 coreos-metadata[1519]: Jul 02 11:09:57.971 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:09:58.049778 coreos-metadata[1519]: Jul 02 11:09:57.971 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Jul 2 11:09:57.970521 systemd[1]: Starting prepare-helm.service... Jul 2 11:09:58.049948 coreos-metadata[1516]: Jul 02 11:09:57.965 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:09:58.049948 coreos-metadata[1516]: Jul 02 11:09:57.969 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda1 Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda2 Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda3 Jul 2 11:09:58.050058 extend-filesystems[1524]: Found usr Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda4 Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda6 Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda7 Jul 2 11:09:58.050058 extend-filesystems[1524]: Found sda9 Jul 2 11:09:58.050058 extend-filesystems[1524]: Checking size of /dev/sda9 Jul 2 11:09:58.050058 extend-filesystems[1524]: Resized partition /dev/sda9 Jul 2 
11:09:58.251602 kernel: bond0: active interface up! Jul 2 11:09:58.251631 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 2 11:09:58.251645 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:09:58.251656 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Jul 2 11:09:58.026263 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 11:09:58.251819 extend-filesystems[1539]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 11:09:58.094172 systemd-networkd[1325]: bond0: Link UP Jul 2 11:09:58.263975 dbus-daemon[1522]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 11:09:58.094410 systemd-networkd[1325]: enp1s0f1np1: Link UP Jul 2 11:09:58.094422 systemd[1]: Starting sshd-keygen.service... Jul 2 11:09:58.269062 update_engine[1553]: I0702 11:09:58.182286 1553 main.cc:92] Flatcar Update Engine starting Jul 2 11:09:58.269062 update_engine[1553]: I0702 11:09:58.185629 1553 update_check_scheduler.cc:74] Next update check in 3m1s Jul 2 11:09:58.094570 systemd-networkd[1325]: enp1s0f1np1: Gained carrier Jul 2 11:09:58.269306 jq[1554]: true Jul 2 11:09:58.095551 systemd-networkd[1325]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:e0:f7:a4.network. Jul 2 11:09:58.110853 systemd[1]: Starting systemd-logind.service... Jul 2 11:09:58.269558 tar[1557]: linux-amd64/helm Jul 2 11:09:58.124517 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:09:58.269759 jq[1559]: true Jul 2 11:09:58.125078 systemd[1]: Starting tcsd.service... 
Jul 2 11:09:58.133110 systemd-logind[1551]: Watching system buttons on /dev/input/event3 (Power Button) Jul 2 11:09:58.133120 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 11:09:58.133130 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 2 11:09:58.133280 systemd-logind[1551]: New seat seat0. Jul 2 11:09:58.138884 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 11:09:58.139271 systemd[1]: Starting update-engine.service... Jul 2 11:09:58.154186 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 11:09:58.175004 systemd[1]: Started dbus.service. Jul 2 11:09:58.188112 systemd-networkd[1325]: bond0: Gained carrier Jul 2 11:09:58.188304 systemd-networkd[1325]: enp1s0f0np0: Link UP Jul 2 11:09:58.188319 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.188462 systemd-networkd[1325]: enp1s0f0np0: Gained carrier Jul 2 11:09:58.195315 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 11:09:58.195411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 11:09:58.195564 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 11:09:58.195636 systemd[1]: Finished motdgen.service. Jul 2 11:09:58.202694 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.202815 systemd-networkd[1325]: enp1s0f1np1: Link DOWN Jul 2 11:09:58.202817 systemd-networkd[1325]: enp1s0f1np1: Lost carrier Jul 2 11:09:58.237663 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.237692 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 11:09:58.237783 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 11:09:58.237847 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.268716 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 2 11:09:58.268835 systemd[1]: Condition check resulted in tcsd.service being skipped. Jul 2 11:09:58.271897 systemd[1]: Started systemd-logind.service. Jul 2 11:09:58.272539 env[1560]: time="2024-07-02T11:09:58.272510205Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 11:09:58.280938 env[1560]: time="2024-07-02T11:09:58.280920639Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 11:09:58.283614 systemd[1]: Started update-engine.service. Jul 2 11:09:58.283679 env[1560]: time="2024-07-02T11:09:58.283664445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284367 env[1560]: time="2024-07-02T11:09:58.284351106Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284402 env[1560]: time="2024-07-02T11:09:58.284366003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284502 env[1560]: time="2024-07-02T11:09:58.284491320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284534 env[1560]: time="2024-07-02T11:09:58.284507671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284534 env[1560]: time="2024-07-02T11:09:58.284515404Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 11:09:58.284534 env[1560]: time="2024-07-02T11:09:58.284520742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284607 env[1560]: time="2024-07-02T11:09:58.284567808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284737 env[1560]: time="2024-07-02T11:09:58.284722667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284811 env[1560]: time="2024-07-02T11:09:58.284801433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 11:09:58.284839 env[1560]: time="2024-07-02T11:09:58.284811351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 11:09:58.286392 env[1560]: time="2024-07-02T11:09:58.286381419Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 11:09:58.286430 env[1560]: time="2024-07-02T11:09:58.286391695Z" level=info msg="metadata content store policy set" policy=shared Jul 2 11:09:58.292111 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:09:58.293262 systemd[1]: Started locksmithd.service. Jul 2 11:09:58.299635 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 2 11:09:58.299717 systemd[1]: Reached target system-config.target. Jul 2 11:09:58.301267 env[1560]: time="2024-07-02T11:09:58.301254950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 11:09:58.301296 env[1560]: time="2024-07-02T11:09:58.301273883Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 11:09:58.301296 env[1560]: time="2024-07-02T11:09:58.301292395Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 11:09:58.301336 env[1560]: time="2024-07-02T11:09:58.301311466Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301336 env[1560]: time="2024-07-02T11:09:58.301329599Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301366 env[1560]: time="2024-07-02T11:09:58.301337846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301366 env[1560]: time="2024-07-02T11:09:58.301353448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301366 env[1560]: time="2024-07-02T11:09:58.301361665Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301415 env[1560]: time="2024-07-02T11:09:58.301378419Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301415 env[1560]: time="2024-07-02T11:09:58.301387200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 2 11:09:58.301415 env[1560]: time="2024-07-02T11:09:58.301394059Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301415 env[1560]: time="2024-07-02T11:09:58.301408772Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 11:09:58.301475 env[1560]: time="2024-07-02T11:09:58.301465935Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 11:09:58.301497 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Jul 2 11:09:58.301557 env[1560]: time="2024-07-02T11:09:58.301530240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 11:09:58.301693 env[1560]: time="2024-07-02T11:09:58.301685088Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 11:09:58.301718 env[1560]: time="2024-07-02T11:09:58.301700267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301738 env[1560]: time="2024-07-02T11:09:58.301721189Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 11:09:58.301763 env[1560]: time="2024-07-02T11:09:58.301757037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301781 env[1560]: time="2024-07-02T11:09:58.301765781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301781 env[1560]: time="2024-07-02T11:09:58.301772547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301781 env[1560]: time="2024-07-02T11:09:58.301778729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 2 11:09:58.301839 env[1560]: time="2024-07-02T11:09:58.301786120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301839 env[1560]: time="2024-07-02T11:09:58.301793005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301839 env[1560]: time="2024-07-02T11:09:58.301799272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301839 env[1560]: time="2024-07-02T11:09:58.301805320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301839 env[1560]: time="2024-07-02T11:09:58.301812766Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 11:09:58.301915 env[1560]: time="2024-07-02T11:09:58.301899276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301933 env[1560]: time="2024-07-02T11:09:58.301908878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301933 env[1560]: time="2024-07-02T11:09:58.301923305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.301965 env[1560]: time="2024-07-02T11:09:58.301934935Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 11:09:58.301965 env[1560]: time="2024-07-02T11:09:58.301950422Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 11:09:58.301965 env[1560]: time="2024-07-02T11:09:58.301961795Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 2 11:09:58.302010 env[1560]: time="2024-07-02T11:09:58.301972528Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 11:09:58.302010 env[1560]: time="2024-07-02T11:09:58.301993809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 11:09:58.302157 env[1560]: time="2024-07-02T11:09:58.302120855Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302166399Z" level=info msg="Connect containerd service" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302185133Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302543240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302635926Z" level=info msg="Start subscribing containerd event" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302672470Z" level=info msg="Start recovering state" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302689239Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302709365Z" level=info msg="Start event monitor" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302718586Z" level=info msg="Start snapshots syncer" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302721095Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302725506Z" level=info msg="Start cni network conf syncer for default" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302750845Z" level=info msg="Start streaming server" Jul 2 11:09:58.304335 env[1560]: time="2024-07-02T11:09:58.302754156Z" level=info msg="containerd successfully booted in 0.030910s" Jul 2 11:09:58.308602 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 11:09:58.308681 systemd[1]: Reached target user-config.target. Jul 2 11:09:58.317563 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:09:58.319225 systemd[1]: Started containerd.service. Jul 2 11:09:58.325764 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 11:09:58.353566 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 11:09:58.433488 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 2 11:09:58.452513 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 11:09:58.452551 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Jul 2 11:09:58.470519 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Jul 2 11:09:58.471163 systemd-networkd[1325]: enp1s0f1np1: Link UP Jul 2 11:09:58.471166 systemd-networkd[1325]: enp1s0f1np1: Gained carrier Jul 2 11:09:58.507482 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 2 11:09:58.512713 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.512776 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.512849 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. 
Jul 2 11:09:58.512982 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:58.519635 tar[1557]: linux-amd64/LICENSE Jul 2 11:09:58.519673 tar[1557]: linux-amd64/README.md Jul 2 11:09:58.522002 systemd[1]: Finished prepare-helm.service. Jul 2 11:09:58.562536 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jul 2 11:09:58.590452 extend-filesystems[1539]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 2 11:09:58.590452 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 2 11:09:58.590452 extend-filesystems[1539]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Jul 2 11:09:58.630562 extend-filesystems[1524]: Resized filesystem in /dev/sda9 Jul 2 11:09:58.630562 extend-filesystems[1524]: Found sdb Jul 2 11:09:58.590797 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 11:09:58.590879 systemd[1]: Finished extend-filesystems.service. Jul 2 11:09:58.969578 coreos-metadata[1516]: Jul 02 11:09:58.969 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 11:09:58.972107 coreos-metadata[1519]: Jul 02 11:09:58.972 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 2 11:09:59.175686 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 11:09:59.188177 systemd[1]: Finished sshd-keygen.service. Jul 2 11:09:59.196401 systemd[1]: Starting issuegen.service... Jul 2 11:09:59.204743 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 11:09:59.204818 systemd[1]: Finished issuegen.service. Jul 2 11:09:59.213288 systemd[1]: Starting systemd-user-sessions.service... Jul 2 11:09:59.222761 systemd[1]: Finished systemd-user-sessions.service. Jul 2 11:09:59.232177 systemd[1]: Started getty@tty1.service. Jul 2 11:09:59.240126 systemd[1]: Started serial-getty@ttyS1.service. Jul 2 11:09:59.248765 systemd[1]: Reached target getty.target. 
Jul 2 11:09:59.486599 systemd-networkd[1325]: bond0: Gained IPv6LL Jul 2 11:09:59.486839 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:59.486996 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:59.487162 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:09:59.487856 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 11:09:59.498730 systemd[1]: Reached target network-online.target. Jul 2 11:09:59.508316 systemd[1]: Starting kubelet.service... Jul 2 11:10:00.178451 systemd[1]: Started kubelet.service. Jul 2 11:10:00.655581 kubelet[1626]: E0702 11:10:00.655468 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:10:00.656777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:10:00.656848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 11:10:01.650661 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Jul 2 11:10:04.259873 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 11:10:04.267435 systemd-logind[1551]: New session 1 of user core. Jul 2 11:10:04.268140 systemd[1]: Created slice user-500.slice. Jul 2 11:10:04.268733 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 11:10:04.269485 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 11:10:04.271551 systemd-logind[1551]: New session 2 of user core. Jul 2 11:10:04.274107 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 11:10:04.274829 systemd[1]: Starting user@500.service... 
Jul 2 11:10:04.276769 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:04.349056 systemd[1648]: Queued start job for default target default.target. Jul 2 11:10:04.349283 systemd[1648]: Reached target paths.target. Jul 2 11:10:04.349295 systemd[1648]: Reached target sockets.target. Jul 2 11:10:04.349303 systemd[1648]: Reached target timers.target. Jul 2 11:10:04.349310 systemd[1648]: Reached target basic.target. Jul 2 11:10:04.349329 systemd[1648]: Reached target default.target. Jul 2 11:10:04.349344 systemd[1648]: Startup finished in 69ms. Jul 2 11:10:04.349391 systemd[1]: Started user@500.service. Jul 2 11:10:04.350007 systemd[1]: Started session-1.scope. Jul 2 11:10:04.350374 systemd[1]: Started session-2.scope. Jul 2 11:10:05.119824 coreos-metadata[1519]: Jul 02 11:10:05.119 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 11:10:05.120618 coreos-metadata[1516]: Jul 02 11:10:05.119 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Jul 2 11:10:06.012706 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Jul 2 11:10:06.012879 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Jul 2 11:10:06.933101 systemd[1]: Created slice system-sshd.slice. Jul 2 11:10:06.933804 systemd[1]: Started sshd@0-145.40.90.137:22-139.178.68.195:39484.service. Jul 2 11:10:06.971487 sshd[1669]: Accepted publickey for core from 139.178.68.195 port 39484 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:06.972615 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:06.976552 systemd-logind[1551]: New session 3 of user core. 
Jul 2 11:10:06.977742 systemd[1]: Started session-3.scope. Jul 2 11:10:07.033852 systemd[1]: Started sshd@1-145.40.90.137:22-139.178.68.195:39492.service. Jul 2 11:10:07.060186 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 39492 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:07.060837 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:07.063276 systemd-logind[1551]: New session 4 of user core. Jul 2 11:10:07.063959 systemd[1]: Started session-4.scope. Jul 2 11:10:07.113529 sshd[1674]: pam_unix(sshd:session): session closed for user core Jul 2 11:10:07.115950 systemd[1]: sshd@1-145.40.90.137:22-139.178.68.195:39492.service: Deactivated successfully. Jul 2 11:10:07.116489 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 11:10:07.117017 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Jul 2 11:10:07.117906 systemd[1]: Started sshd@2-145.40.90.137:22-139.178.68.195:39502.service. Jul 2 11:10:07.118659 systemd-logind[1551]: Removed session 4. Jul 2 11:10:07.119828 coreos-metadata[1519]: Jul 02 11:10:07.119 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 2 11:10:07.120027 coreos-metadata[1516]: Jul 02 11:10:07.119 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 2 11:10:07.143423 coreos-metadata[1516]: Jul 02 11:10:07.143 INFO Fetch successful Jul 2 11:10:07.143616 coreos-metadata[1519]: Jul 02 11:10:07.143 INFO Fetch successful Jul 2 11:10:07.146767 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 39502 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:07.147684 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:07.150134 systemd-logind[1551]: New session 5 of user core. Jul 2 11:10:07.150851 systemd[1]: Started session-5.scope. Jul 2 11:10:07.168643 systemd[1]: Finished coreos-metadata.service. 
Jul 2 11:10:07.169282 unknown[1516]: wrote ssh authorized keys file for user: core Jul 2 11:10:07.169483 systemd[1]: Started packet-phone-home.service. Jul 2 11:10:07.174716 curl[1685]: % Total % Received % Xferd Average Speed Time Time Time Current Jul 2 11:10:07.174878 curl[1685]: Dload Upload Total Spent Left Speed Jul 2 11:10:07.180303 update-ssh-keys[1686]: Updated "/home/core/.ssh/authorized_keys" Jul 2 11:10:07.180563 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 11:10:07.180854 systemd[1]: Reached target multi-user.target. Jul 2 11:10:07.181593 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 11:10:07.185714 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 11:10:07.185789 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 11:10:07.185927 systemd[1]: Startup finished in 1.919s (kernel) + 18.551s (initrd) + 15.789s (userspace) = 36.260s. Jul 2 11:10:07.198645 sshd[1680]: pam_unix(sshd:session): session closed for user core Jul 2 11:10:07.199918 systemd[1]: sshd@2-145.40.90.137:22-139.178.68.195:39502.service: Deactivated successfully. Jul 2 11:10:07.200308 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 11:10:07.200661 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Jul 2 11:10:07.201078 systemd-logind[1551]: Removed session 5. Jul 2 11:10:07.370879 curl[1685]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Jul 2 11:10:07.373336 systemd[1]: packet-phone-home.service: Deactivated successfully. Jul 2 11:10:10.908653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 11:10:10.909410 systemd[1]: Stopped kubelet.service. Jul 2 11:10:10.912626 systemd[1]: Starting kubelet.service... Jul 2 11:10:11.135594 systemd[1]: Started kubelet.service. 
Jul 2 11:10:11.195024 kubelet[1694]: E0702 11:10:11.194907 1694 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:10:11.198125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:10:11.198237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 11:10:17.211590 systemd[1]: Started sshd@3-145.40.90.137:22-139.178.68.195:34420.service. Jul 2 11:10:17.240128 sshd[1712]: Accepted publickey for core from 139.178.68.195 port 34420 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:17.240767 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:17.243060 systemd-logind[1551]: New session 6 of user core. Jul 2 11:10:17.243778 systemd[1]: Started session-6.scope. Jul 2 11:10:17.295176 sshd[1712]: pam_unix(sshd:session): session closed for user core Jul 2 11:10:17.296644 systemd[1]: sshd@3-145.40.90.137:22-139.178.68.195:34420.service: Deactivated successfully. Jul 2 11:10:17.296926 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 11:10:17.297244 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit. Jul 2 11:10:17.297791 systemd[1]: Started sshd@4-145.40.90.137:22-139.178.68.195:34430.service. Jul 2 11:10:17.298172 systemd-logind[1551]: Removed session 6. Jul 2 11:10:17.326108 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 34430 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:17.326924 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:17.329692 systemd-logind[1551]: New session 7 of user core. Jul 2 11:10:17.330260 systemd[1]: Started session-7.scope. 
Jul 2 11:10:17.381285 sshd[1718]: pam_unix(sshd:session): session closed for user core Jul 2 11:10:17.382946 systemd[1]: sshd@4-145.40.90.137:22-139.178.68.195:34430.service: Deactivated successfully. Jul 2 11:10:17.383231 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 11:10:17.383475 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. Jul 2 11:10:17.384065 systemd[1]: Started sshd@5-145.40.90.137:22-139.178.68.195:34444.service. Jul 2 11:10:17.384454 systemd-logind[1551]: Removed session 7. Jul 2 11:10:17.416897 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 34444 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:17.417603 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:17.420016 systemd-logind[1551]: New session 8 of user core. Jul 2 11:10:17.420443 systemd[1]: Started session-8.scope. Jul 2 11:10:17.475472 sshd[1724]: pam_unix(sshd:session): session closed for user core Jul 2 11:10:17.480427 systemd[1]: sshd@5-145.40.90.137:22-139.178.68.195:34444.service: Deactivated successfully. Jul 2 11:10:17.481753 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 11:10:17.483212 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Jul 2 11:10:17.485778 systemd[1]: Started sshd@6-145.40.90.137:22-139.178.68.195:34450.service. Jul 2 11:10:17.488294 systemd-logind[1551]: Removed session 8. Jul 2 11:10:17.593965 sshd[1730]: Accepted publickey for core from 139.178.68.195 port 34450 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:10:17.595559 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:10:17.600217 systemd-logind[1551]: New session 9 of user core. Jul 2 11:10:17.601187 systemd[1]: Started session-9.scope. 
Jul 2 11:10:17.686032 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 11:10:17.686742 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 11:10:17.711052 systemd[1]: Starting docker.service... Jul 2 11:10:17.729517 env[1747]: time="2024-07-02T11:10:17.729441174Z" level=info msg="Starting up" Jul 2 11:10:17.730291 env[1747]: time="2024-07-02T11:10:17.730250910Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 11:10:17.730291 env[1747]: time="2024-07-02T11:10:17.730260547Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 11:10:17.730291 env[1747]: time="2024-07-02T11:10:17.730272473Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 11:10:17.730291 env[1747]: time="2024-07-02T11:10:17.730278592Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 11:10:17.731109 env[1747]: time="2024-07-02T11:10:17.731063978Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 11:10:17.731109 env[1747]: time="2024-07-02T11:10:17.731074399Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 11:10:17.731109 env[1747]: time="2024-07-02T11:10:17.731083197Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 11:10:17.731109 env[1747]: time="2024-07-02T11:10:17.731088558Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 11:10:17.743873 env[1747]: time="2024-07-02T11:10:17.743831312Z" level=info msg="Loading containers: start." Jul 2 11:10:17.920525 kernel: Initializing XFRM netlink socket Jul 2 11:10:17.954323 env[1747]: time="2024-07-02T11:10:17.954278271Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 2 11:10:17.955005 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Jul 2 11:10:17.997219 systemd-networkd[1325]: docker0: Link UP Jul 2 11:10:18.001683 env[1747]: time="2024-07-02T11:10:18.001667043Z" level=info msg="Loading containers: done." Jul 2 11:10:18.006966 env[1747]: time="2024-07-02T11:10:18.006922985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 11:10:18.007042 env[1747]: time="2024-07-02T11:10:18.007006013Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 11:10:18.007066 env[1747]: time="2024-07-02T11:10:18.007051842Z" level=info msg="Daemon has completed initialization" Jul 2 11:10:18.013788 systemd[1]: Started docker.service. Jul 2 11:10:18.017652 env[1747]: time="2024-07-02T11:10:18.017600695Z" level=info msg="API listen on /run/docker.sock" Jul 2 11:10:18.155381 systemd-timesyncd[1501]: Contacted time server [2603:c024:c005:a600:efb6:d213:cad8:251d]:123 (2.flatcar.pool.ntp.org). Jul 2 11:10:18.155422 systemd-timesyncd[1501]: Initial clock synchronization to Tue 2024-07-02 11:10:18.452444 UTC. Jul 2 11:10:18.994268 env[1560]: time="2024-07-02T11:10:18.994161940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 11:10:19.677870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2779879578.mount: Deactivated successfully. 
Jul 2 11:10:20.905302 env[1560]: time="2024-07-02T11:10:20.905251443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:20.905979 env[1560]: time="2024-07-02T11:10:20.905922099Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:20.907207 env[1560]: time="2024-07-02T11:10:20.907140452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:20.908231 env[1560]: time="2024-07-02T11:10:20.908203520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:20.908743 env[1560]: time="2024-07-02T11:10:20.908712433Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 11:10:20.919684 env[1560]: time="2024-07-02T11:10:20.919664838Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 11:10:21.322994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 11:10:21.323890 systemd[1]: Stopped kubelet.service. Jul 2 11:10:21.327578 systemd[1]: Starting kubelet.service... Jul 2 11:10:21.561423 systemd[1]: Started kubelet.service. 
Jul 2 11:10:21.588207 kubelet[1925]: E0702 11:10:21.588083 1925 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:10:21.589318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:10:21.589390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 11:10:22.434709 env[1560]: time="2024-07-02T11:10:22.434659807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:22.435358 env[1560]: time="2024-07-02T11:10:22.435315018Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:22.436459 env[1560]: time="2024-07-02T11:10:22.436445262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:22.437487 env[1560]: time="2024-07-02T11:10:22.437473971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:22.437912 env[1560]: time="2024-07-02T11:10:22.437883150Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 11:10:22.443906 env[1560]: time="2024-07-02T11:10:22.443874686Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 11:10:23.539458 env[1560]: time="2024-07-02T11:10:23.539407137Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:23.540627 env[1560]: time="2024-07-02T11:10:23.540589450Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:23.542389 env[1560]: time="2024-07-02T11:10:23.542374574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:23.543330 env[1560]: time="2024-07-02T11:10:23.543280402Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:23.543788 env[1560]: time="2024-07-02T11:10:23.543751137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 11:10:23.550974 env[1560]: time="2024-07-02T11:10:23.550908713Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 11:10:24.447923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889665232.mount: Deactivated successfully. 
Jul 2 11:10:24.809367 env[1560]: time="2024-07-02T11:10:24.809301579Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:24.809993 env[1560]: time="2024-07-02T11:10:24.809979687Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:24.810609 env[1560]: time="2024-07-02T11:10:24.810594648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:24.811163 env[1560]: time="2024-07-02T11:10:24.811151200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:24.811432 env[1560]: time="2024-07-02T11:10:24.811419696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 11:10:24.818164 env[1560]: time="2024-07-02T11:10:24.818148746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 11:10:25.419086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652852178.mount: Deactivated successfully. 
Jul 2 11:10:26.107263 env[1560]: time="2024-07-02T11:10:26.107207833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.107956 env[1560]: time="2024-07-02T11:10:26.107913794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.108999 env[1560]: time="2024-07-02T11:10:26.108950364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.110078 env[1560]: time="2024-07-02T11:10:26.110030238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.111060 env[1560]: time="2024-07-02T11:10:26.111022930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 11:10:26.116656 env[1560]: time="2024-07-02T11:10:26.116621340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 11:10:26.633411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82717824.mount: Deactivated successfully. 
Jul 2 11:10:26.635026 env[1560]: time="2024-07-02T11:10:26.634962926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.635699 env[1560]: time="2024-07-02T11:10:26.635640355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.636516 env[1560]: time="2024-07-02T11:10:26.636430878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.637318 env[1560]: time="2024-07-02T11:10:26.637296233Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:26.638075 env[1560]: time="2024-07-02T11:10:26.638060670Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 11:10:26.644181 env[1560]: time="2024-07-02T11:10:26.644145996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 11:10:27.208473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380733430.mount: Deactivated successfully. 
Jul 2 11:10:28.931135 env[1560]: time="2024-07-02T11:10:28.931102859Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:28.931818 env[1560]: time="2024-07-02T11:10:28.931735788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:28.933753 env[1560]: time="2024-07-02T11:10:28.933738810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:28.934886 env[1560]: time="2024-07-02T11:10:28.934873622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:28.935987 env[1560]: time="2024-07-02T11:10:28.935953213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 11:10:30.441368 systemd[1]: Stopped kubelet.service. Jul 2 11:10:30.442699 systemd[1]: Starting kubelet.service... Jul 2 11:10:30.452044 systemd[1]: Reloading. 
Jul 2 11:10:30.487269 /usr/lib/systemd/system-generators/torcx-generator[2130]: time="2024-07-02T11:10:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:10:30.487285 /usr/lib/systemd/system-generators/torcx-generator[2130]: time="2024-07-02T11:10:30Z" level=info msg="torcx already run" Jul 2 11:10:30.540802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:10:30.540810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:10:30.552232 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:10:30.612435 systemd[1]: Started kubelet.service. Jul 2 11:10:30.613229 systemd[1]: Stopping kubelet.service... Jul 2 11:10:30.613372 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 11:10:30.613461 systemd[1]: Stopped kubelet.service. Jul 2 11:10:30.614182 systemd[1]: Starting kubelet.service... Jul 2 11:10:30.798527 systemd[1]: Started kubelet.service. Jul 2 11:10:30.839914 kubelet[2198]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:10:30.839914 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 11:10:30.839914 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:10:30.840244 kubelet[2198]: I0702 11:10:30.839955 2198 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 11:10:31.063364 kubelet[2198]: I0702 11:10:31.063286 2198 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 11:10:31.063364 kubelet[2198]: I0702 11:10:31.063301 2198 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 11:10:31.063463 kubelet[2198]: I0702 11:10:31.063440 2198 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 11:10:31.075433 kubelet[2198]: I0702 11:10:31.075400 2198 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:10:31.076482 kubelet[2198]: E0702 11:10:31.076447 2198 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://145.40.90.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.124238 kubelet[2198]: I0702 11:10:31.124138 2198 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 11:10:31.126383 kubelet[2198]: I0702 11:10:31.126293 2198 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 11:10:31.126874 kubelet[2198]: I0702 11:10:31.126378 2198 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.5-a-3a013adf74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 11:10:31.126874 kubelet[2198]: I0702 11:10:31.126847 2198 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
11:10:31.126874 kubelet[2198]: I0702 11:10:31.126870 2198 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 11:10:31.127227 kubelet[2198]: I0702 11:10:31.127044 2198 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:10:31.128977 kubelet[2198]: I0702 11:10:31.128905 2198 kubelet.go:400] "Attempting to sync node with API server" Jul 2 11:10:31.129199 kubelet[2198]: I0702 11:10:31.129174 2198 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 11:10:31.129323 kubelet[2198]: I0702 11:10:31.129225 2198 kubelet.go:312] "Adding apiserver pod source" Jul 2 11:10:31.129323 kubelet[2198]: I0702 11:10:31.129255 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 11:10:31.136370 kubelet[2198]: W0702 11:10:31.136258 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.136370 kubelet[2198]: E0702 11:10:31.136357 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://145.40.90.137:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.140516 kubelet[2198]: W0702 11:10:31.140436 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-3a013adf74&limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.140516 kubelet[2198]: E0702 11:10:31.140502 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://145.40.90.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-3a013adf74&limit=500&resourceVersion=0": 
dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.146163 kubelet[2198]: I0702 11:10:31.146103 2198 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 11:10:31.150842 kubelet[2198]: I0702 11:10:31.150800 2198 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 11:10:31.150910 kubelet[2198]: W0702 11:10:31.150848 2198 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 11:10:31.151335 kubelet[2198]: I0702 11:10:31.151316 2198 server.go:1264] "Started kubelet" Jul 2 11:10:31.151421 kubelet[2198]: I0702 11:10:31.151377 2198 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 11:10:31.151506 kubelet[2198]: I0702 11:10:31.151401 2198 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 11:10:31.151725 kubelet[2198]: I0702 11:10:31.151707 2198 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 11:10:31.152718 kubelet[2198]: I0702 11:10:31.152703 2198 server.go:455] "Adding debug handlers to kubelet server" Jul 2 11:10:31.161689 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 2 11:10:31.161756 kubelet[2198]: I0702 11:10:31.161750 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 11:10:31.161845 kubelet[2198]: I0702 11:10:31.161833 2198 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 11:10:31.161887 kubelet[2198]: E0702 11:10:31.161844 2198 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-3a013adf74\" not found" Jul 2 11:10:31.161887 kubelet[2198]: I0702 11:10:31.161858 2198 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 11:10:31.161947 kubelet[2198]: I0702 11:10:31.161895 2198 reconciler.go:26] "Reconciler: start to sync state" Jul 2 11:10:31.162044 kubelet[2198]: E0702 11:10:31.162025 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-3a013adf74?timeout=10s\": dial tcp 145.40.90.137:6443: connect: connection refused" interval="200ms" Jul 2 11:10:31.162118 kubelet[2198]: W0702 11:10:31.162085 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.162118 kubelet[2198]: E0702 11:10:31.162117 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.162190 kubelet[2198]: I0702 11:10:31.162181 2198 factory.go:221] Registration of the systemd container factory successfully Jul 2 11:10:31.162241 kubelet[2198]: I0702 11:10:31.162232 2198 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 11:10:31.162494 kubelet[2198]: E0702 11:10:31.162482 2198 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 11:10:31.162725 kubelet[2198]: I0702 11:10:31.162718 2198 factory.go:221] Registration of the containerd container factory successfully Jul 2 11:10:31.162800 kubelet[2198]: E0702 11:10:31.162749 2198 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.137:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.137:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-a-3a013adf74.17de60e1ec596329 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-a-3a013adf74,UID:ci-3510.3.5-a-3a013adf74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-3a013adf74,},FirstTimestamp:2024-07-02 11:10:31.151297321 +0000 UTC m=+0.350207335,LastTimestamp:2024-07-02 11:10:31.151297321 +0000 UTC m=+0.350207335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-3a013adf74,}" Jul 2 11:10:31.171072 kubelet[2198]: I0702 11:10:31.171051 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 11:10:31.171629 kubelet[2198]: I0702 11:10:31.171592 2198 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 11:10:31.171629 kubelet[2198]: I0702 11:10:31.171609 2198 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 11:10:31.171629 kubelet[2198]: I0702 11:10:31.171623 2198 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 11:10:31.171756 kubelet[2198]: E0702 11:10:31.171652 2198 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 11:10:31.172166 kubelet[2198]: W0702 11:10:31.172088 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.172166 kubelet[2198]: E0702 11:10:31.172146 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://145.40.90.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:31.196401 kubelet[2198]: I0702 11:10:31.196391 2198 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 11:10:31.196401 kubelet[2198]: I0702 11:10:31.196398 2198 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 11:10:31.196458 kubelet[2198]: I0702 11:10:31.196407 2198 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:10:31.209195 kubelet[2198]: I0702 11:10:31.209157 2198 policy_none.go:49] "None policy: Start" Jul 2 11:10:31.209456 kubelet[2198]: I0702 11:10:31.209446 2198 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 11:10:31.209511 kubelet[2198]: I0702 11:10:31.209464 2198 state_mem.go:35] "Initializing new in-memory state store" Jul 2 11:10:31.212759 systemd[1]: Created slice kubepods.slice. 
Jul 2 11:10:31.215619 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 11:10:31.217374 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 11:10:31.233291 kubelet[2198]: I0702 11:10:31.233243 2198 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 11:10:31.233430 kubelet[2198]: I0702 11:10:31.233398 2198 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 11:10:31.233565 kubelet[2198]: I0702 11:10:31.233538 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 11:10:31.234368 kubelet[2198]: E0702 11:10:31.234349 2198 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-3a013adf74\" not found" Jul 2 11:10:31.266611 kubelet[2198]: I0702 11:10:31.266522 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.267305 kubelet[2198]: E0702 11:10:31.267208 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.137:6443/api/v1/nodes\": dial tcp 145.40.90.137:6443: connect: connection refused" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.272558 kubelet[2198]: I0702 11:10:31.272423 2198 topology_manager.go:215] "Topology Admit Handler" podUID="92041b369fa866f914c0820b83970c44" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.275808 kubelet[2198]: I0702 11:10:31.275726 2198 topology_manager.go:215] "Topology Admit Handler" podUID="86e26c19042bbd422101ef63370ee935" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.279236 kubelet[2198]: I0702 11:10:31.279145 2198 topology_manager.go:215] "Topology Admit Handler" podUID="0e67a7bfbd1b2451106d599dec0f0e6b" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.292115 
systemd[1]: Created slice kubepods-burstable-pod92041b369fa866f914c0820b83970c44.slice. Jul 2 11:10:31.329389 systemd[1]: Created slice kubepods-burstable-pod86e26c19042bbd422101ef63370ee935.slice. Jul 2 11:10:31.337310 systemd[1]: Created slice kubepods-burstable-pod0e67a7bfbd1b2451106d599dec0f0e6b.slice. Jul 2 11:10:31.362993 kubelet[2198]: E0702 11:10:31.362898 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-3a013adf74?timeout=10s\": dial tcp 145.40.90.137:6443: connect: connection refused" interval="400ms" Jul 2 11:10:31.362993 kubelet[2198]: I0702 11:10:31.362972 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92041b369fa866f914c0820b83970c44-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" (UID: \"92041b369fa866f914c0820b83970c44\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363407 kubelet[2198]: I0702 11:10:31.363051 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363407 kubelet[2198]: I0702 11:10:31.363118 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363407 kubelet[2198]: I0702 
11:10:31.363169 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363407 kubelet[2198]: I0702 11:10:31.363215 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363407 kubelet[2198]: I0702 11:10:31.363304 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363931 kubelet[2198]: I0702 11:10:31.363385 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e67a7bfbd1b2451106d599dec0f0e6b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-3a013adf74\" (UID: \"0e67a7bfbd1b2451106d599dec0f0e6b\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363931 kubelet[2198]: I0702 11:10:31.363430 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92041b369fa866f914c0820b83970c44-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" (UID: 
\"92041b369fa866f914c0820b83970c44\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.363931 kubelet[2198]: I0702 11:10:31.363475 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92041b369fa866f914c0820b83970c44-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" (UID: \"92041b369fa866f914c0820b83970c44\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.471271 kubelet[2198]: I0702 11:10:31.471215 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.472025 kubelet[2198]: E0702 11:10:31.471922 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.137:6443/api/v1/nodes\": dial tcp 145.40.90.137:6443: connect: connection refused" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.623651 env[1560]: time="2024-07-02T11:10:31.623396716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-3a013adf74,Uid:92041b369fa866f914c0820b83970c44,Namespace:kube-system,Attempt:0,}" Jul 2 11:10:31.635709 env[1560]: time="2024-07-02T11:10:31.635587353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-3a013adf74,Uid:86e26c19042bbd422101ef63370ee935,Namespace:kube-system,Attempt:0,}" Jul 2 11:10:31.642789 env[1560]: time="2024-07-02T11:10:31.642673216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-3a013adf74,Uid:0e67a7bfbd1b2451106d599dec0f0e6b,Namespace:kube-system,Attempt:0,}" Jul 2 11:10:31.764415 kubelet[2198]: E0702 11:10:31.764306 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-3a013adf74?timeout=10s\": dial tcp 
145.40.90.137:6443: connect: connection refused" interval="800ms" Jul 2 11:10:31.877400 kubelet[2198]: I0702 11:10:31.877231 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:31.878328 kubelet[2198]: E0702 11:10:31.878023 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://145.40.90.137:6443/api/v1/nodes\": dial tcp 145.40.90.137:6443: connect: connection refused" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:32.046406 kubelet[2198]: W0702 11:10:32.046235 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:32.046406 kubelet[2198]: E0702 11:10:32.046397 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://145.40.90.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.137:6443: connect: connection refused Jul 2 11:10:32.193931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586546634.mount: Deactivated successfully. 
Jul 2 11:10:32.194813 env[1560]: time="2024-07-02T11:10:32.194767051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.195770 env[1560]: time="2024-07-02T11:10:32.195718240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.196469 env[1560]: time="2024-07-02T11:10:32.196423495Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.197106 env[1560]: time="2024-07-02T11:10:32.197063870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.197423 env[1560]: time="2024-07-02T11:10:32.197389806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.198284 env[1560]: time="2024-07-02T11:10:32.198244287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.198713 env[1560]: time="2024-07-02T11:10:32.198674120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.200281 env[1560]: time="2024-07-02T11:10:32.200240231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 
11:10:32.201960 env[1560]: time="2024-07-02T11:10:32.201901354Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.202778 env[1560]: time="2024-07-02T11:10:32.202738567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.203142 env[1560]: time="2024-07-02T11:10:32.203100181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.203494 env[1560]: time="2024-07-02T11:10:32.203458140Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:10:32.209938 env[1560]: time="2024-07-02T11:10:32.209906672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:10:32.209938 env[1560]: time="2024-07-02T11:10:32.209927365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:10:32.209938 env[1560]: time="2024-07-02T11:10:32.209905393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:10:32.209938 env[1560]: time="2024-07-02T11:10:32.209928927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:10:32.209938 env[1560]: time="2024-07-02T11:10:32.209934542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:10:32.209938 env[1560]: time="2024-07-02T11:10:32.209938755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:10:32.210120 env[1560]: time="2024-07-02T11:10:32.210002696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8806fa97d2d604313b7264eed4c1ffd00a33c3e655db46c4bc55952a750a1ad pid=2259 runtime=io.containerd.runc.v2 Jul 2 11:10:32.210120 env[1560]: time="2024-07-02T11:10:32.210002506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e178c2b4fcce1fccef387c9cafac865cf0ca71bb5a483cc9fc91e057fd52397a pid=2260 runtime=io.containerd.runc.v2 Jul 2 11:10:32.210443 env[1560]: time="2024-07-02T11:10:32.210417567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:10:32.210443 env[1560]: time="2024-07-02T11:10:32.210434579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:10:32.210443 env[1560]: time="2024-07-02T11:10:32.210441537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:10:32.210536 env[1560]: time="2024-07-02T11:10:32.210515644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79ac60ebe6207c85dd61f259ceff1f454010ad317a1d43d5867d21bc39199e77 pid=2272 runtime=io.containerd.runc.v2 Jul 2 11:10:32.218224 systemd[1]: Started cri-containerd-79ac60ebe6207c85dd61f259ceff1f454010ad317a1d43d5867d21bc39199e77.scope. Jul 2 11:10:32.219234 systemd[1]: Started cri-containerd-e178c2b4fcce1fccef387c9cafac865cf0ca71bb5a483cc9fc91e057fd52397a.scope. Jul 2 11:10:32.220113 systemd[1]: Started cri-containerd-f8806fa97d2d604313b7264eed4c1ffd00a33c3e655db46c4bc55952a750a1ad.scope. Jul 2 11:10:32.243855 env[1560]: time="2024-07-02T11:10:32.243822875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-3a013adf74,Uid:86e26c19042bbd422101ef63370ee935,Namespace:kube-system,Attempt:0,} returns sandbox id \"e178c2b4fcce1fccef387c9cafac865cf0ca71bb5a483cc9fc91e057fd52397a\"" Jul 2 11:10:32.245174 env[1560]: time="2024-07-02T11:10:32.245154237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-3a013adf74,Uid:0e67a7bfbd1b2451106d599dec0f0e6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"79ac60ebe6207c85dd61f259ceff1f454010ad317a1d43d5867d21bc39199e77\"" Jul 2 11:10:32.245626 env[1560]: time="2024-07-02T11:10:32.245612610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-3a013adf74,Uid:92041b369fa866f914c0820b83970c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8806fa97d2d604313b7264eed4c1ffd00a33c3e655db46c4bc55952a750a1ad\"" Jul 2 11:10:32.246024 env[1560]: time="2024-07-02T11:10:32.246009125Z" level=info msg="CreateContainer within sandbox \"e178c2b4fcce1fccef387c9cafac865cf0ca71bb5a483cc9fc91e057fd52397a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 
11:10:32.246108 env[1560]: time="2024-07-02T11:10:32.246095095Z" level=info msg="CreateContainer within sandbox \"79ac60ebe6207c85dd61f259ceff1f454010ad317a1d43d5867d21bc39199e77\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 11:10:32.247089 env[1560]: time="2024-07-02T11:10:32.247066317Z" level=info msg="CreateContainer within sandbox \"f8806fa97d2d604313b7264eed4c1ffd00a33c3e655db46c4bc55952a750a1ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 11:10:32.253830 env[1560]: time="2024-07-02T11:10:32.253785150Z" level=info msg="CreateContainer within sandbox \"e178c2b4fcce1fccef387c9cafac865cf0ca71bb5a483cc9fc91e057fd52397a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"566239df02eaabd7351f6e71f0b99c811991b8ed697093b36a3a7a50be3df7d0\"" Jul 2 11:10:32.254115 env[1560]: time="2024-07-02T11:10:32.254081248Z" level=info msg="StartContainer for \"566239df02eaabd7351f6e71f0b99c811991b8ed697093b36a3a7a50be3df7d0\"" Jul 2 11:10:32.254739 env[1560]: time="2024-07-02T11:10:32.254695923Z" level=info msg="CreateContainer within sandbox \"79ac60ebe6207c85dd61f259ceff1f454010ad317a1d43d5867d21bc39199e77\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24fe278b119d5d94c6d941f2911790dfa6b4bd5a42b0b520589ad3198043bfe9\"" Jul 2 11:10:32.254891 env[1560]: time="2024-07-02T11:10:32.254844796Z" level=info msg="StartContainer for \"24fe278b119d5d94c6d941f2911790dfa6b4bd5a42b0b520589ad3198043bfe9\"" Jul 2 11:10:32.255093 env[1560]: time="2024-07-02T11:10:32.255046792Z" level=info msg="CreateContainer within sandbox \"f8806fa97d2d604313b7264eed4c1ffd00a33c3e655db46c4bc55952a750a1ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b970a576bd46db86efcbadb78f45365eec038ed62655b36b06f971e0c1b17e0\"" Jul 2 11:10:32.255214 env[1560]: time="2024-07-02T11:10:32.255178055Z" level=info msg="StartContainer for 
\"9b970a576bd46db86efcbadb78f45365eec038ed62655b36b06f971e0c1b17e0\"" Jul 2 11:10:32.265343 systemd[1]: Started cri-containerd-24fe278b119d5d94c6d941f2911790dfa6b4bd5a42b0b520589ad3198043bfe9.scope. Jul 2 11:10:32.266222 systemd[1]: Started cri-containerd-566239df02eaabd7351f6e71f0b99c811991b8ed697093b36a3a7a50be3df7d0.scope. Jul 2 11:10:32.266815 systemd[1]: Started cri-containerd-9b970a576bd46db86efcbadb78f45365eec038ed62655b36b06f971e0c1b17e0.scope. Jul 2 11:10:32.291002 env[1560]: time="2024-07-02T11:10:32.290973128Z" level=info msg="StartContainer for \"9b970a576bd46db86efcbadb78f45365eec038ed62655b36b06f971e0c1b17e0\" returns successfully" Jul 2 11:10:32.291167 env[1560]: time="2024-07-02T11:10:32.291055299Z" level=info msg="StartContainer for \"24fe278b119d5d94c6d941f2911790dfa6b4bd5a42b0b520589ad3198043bfe9\" returns successfully" Jul 2 11:10:32.291167 env[1560]: time="2024-07-02T11:10:32.291102861Z" level=info msg="StartContainer for \"566239df02eaabd7351f6e71f0b99c811991b8ed697093b36a3a7a50be3df7d0\" returns successfully" Jul 2 11:10:32.680276 kubelet[2198]: I0702 11:10:32.680258 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:32.792477 kubelet[2198]: E0702 11:10:32.792458 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-3a013adf74\" not found" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:32.893199 kubelet[2198]: I0702 11:10:32.893176 2198 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:33.130501 kubelet[2198]: I0702 11:10:33.130489 2198 apiserver.go:52] "Watching apiserver" Jul 2 11:10:33.162881 kubelet[2198]: I0702 11:10:33.162842 2198 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 11:10:33.183141 kubelet[2198]: E0702 11:10:33.183123 2198 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:33.183141 kubelet[2198]: E0702 11:10:33.183133 2198 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.5-a-3a013adf74\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:33.183305 kubelet[2198]: E0702 11:10:33.183193 2198 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:34.201346 kubelet[2198]: W0702 11:10:34.201287 2198 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:35.259937 systemd[1]: Reloading. Jul 2 11:10:35.327289 /usr/lib/systemd/system-generators/torcx-generator[2527]: time="2024-07-02T11:10:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:10:35.327306 /usr/lib/systemd/system-generators/torcx-generator[2527]: time="2024-07-02T11:10:35Z" level=info msg="torcx already run" Jul 2 11:10:35.384235 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:10:35.384244 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 11:10:35.397683 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:10:35.466300 systemd[1]: Stopping kubelet.service... Jul 2 11:10:35.486796 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 11:10:35.486896 systemd[1]: Stopped kubelet.service. Jul 2 11:10:35.487760 systemd[1]: Starting kubelet.service... Jul 2 11:10:35.703757 systemd[1]: Started kubelet.service. Jul 2 11:10:35.775882 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:10:35.775882 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 11:10:35.775882 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 11:10:35.776318 kubelet[2591]: I0702 11:10:35.775914 2591 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 11:10:35.781210 kubelet[2591]: I0702 11:10:35.781176 2591 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 11:10:35.781210 kubelet[2591]: I0702 11:10:35.781204 2591 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 11:10:35.781815 kubelet[2591]: I0702 11:10:35.781772 2591 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 11:10:35.783088 kubelet[2591]: I0702 11:10:35.783047 2591 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 11:10:35.784065 kubelet[2591]: I0702 11:10:35.784045 2591 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:10:35.808353 kubelet[2591]: I0702 11:10:35.808307 2591 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 11:10:35.808554 kubelet[2591]: I0702 11:10:35.808500 2591 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 11:10:35.808706 kubelet[2591]: I0702 11:10:35.808526 2591 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.5-a-3a013adf74","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 11:10:35.808706 kubelet[2591]: I0702 11:10:35.808690 2591 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
11:10:35.808706 kubelet[2591]: I0702 11:10:35.808702 2591 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 11:10:35.808904 kubelet[2591]: I0702 11:10:35.808737 2591 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:10:35.808904 kubelet[2591]: I0702 11:10:35.808810 2591 kubelet.go:400] "Attempting to sync node with API server" Jul 2 11:10:35.808904 kubelet[2591]: I0702 11:10:35.808822 2591 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 11:10:35.808904 kubelet[2591]: I0702 11:10:35.808842 2591 kubelet.go:312] "Adding apiserver pod source" Jul 2 11:10:35.808904 kubelet[2591]: I0702 11:10:35.808857 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 11:10:35.810084 kubelet[2591]: I0702 11:10:35.810016 2591 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 11:10:35.810362 kubelet[2591]: I0702 11:10:35.810347 2591 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 11:10:35.810823 kubelet[2591]: I0702 11:10:35.810807 2591 server.go:1264] "Started kubelet" Jul 2 11:10:35.811259 kubelet[2591]: I0702 11:10:35.811195 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 11:10:35.811517 kubelet[2591]: I0702 11:10:35.811368 2591 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 11:10:35.811640 kubelet[2591]: I0702 11:10:35.811614 2591 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 11:10:35.813526 kubelet[2591]: I0702 11:10:35.813507 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 11:10:35.813622 kubelet[2591]: I0702 11:10:35.813568 2591 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 11:10:35.813622 kubelet[2591]: I0702 11:10:35.813611 2591 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 11:10:35.813920 kubelet[2591]: I0702 11:10:35.813775 2591 reconciler.go:26] "Reconciler: start to sync state" Jul 2 11:10:35.813920 kubelet[2591]: I0702 11:10:35.813799 2591 server.go:455] "Adding debug handlers to kubelet server" Jul 2 11:10:35.813920 kubelet[2591]: E0702 11:10:35.813878 2591 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 11:10:35.815633 kubelet[2591]: E0702 11:10:35.814026 2591 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-3a013adf74\" not found" Jul 2 11:10:35.815633 kubelet[2591]: I0702 11:10:35.814226 2591 factory.go:221] Registration of the systemd container factory successfully Jul 2 11:10:35.815633 kubelet[2591]: I0702 11:10:35.814354 2591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 11:10:35.815633 kubelet[2591]: I0702 11:10:35.815233 2591 factory.go:221] Registration of the containerd container factory successfully Jul 2 11:10:35.824048 kubelet[2591]: I0702 11:10:35.824012 2591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 11:10:35.825025 kubelet[2591]: I0702 11:10:35.825007 2591 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 11:10:35.825118 kubelet[2591]: I0702 11:10:35.825039 2591 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 11:10:35.825118 kubelet[2591]: I0702 11:10:35.825063 2591 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 11:10:35.825239 kubelet[2591]: E0702 11:10:35.825118 2591 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 11:10:35.842201 kubelet[2591]: I0702 11:10:35.842161 2591 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 11:10:35.842201 kubelet[2591]: I0702 11:10:35.842177 2591 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 11:10:35.842201 kubelet[2591]: I0702 11:10:35.842193 2591 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:10:35.842402 kubelet[2591]: I0702 11:10:35.842334 2591 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 11:10:35.842402 kubelet[2591]: I0702 11:10:35.842347 2591 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 11:10:35.842402 kubelet[2591]: I0702 11:10:35.842366 2591 policy_none.go:49] "None policy: Start" Jul 2 11:10:35.842847 kubelet[2591]: I0702 11:10:35.842806 2591 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 11:10:35.842847 kubelet[2591]: I0702 11:10:35.842826 2591 state_mem.go:35] "Initializing new in-memory state store" Jul 2 11:10:35.842961 kubelet[2591]: I0702 11:10:35.842954 2591 state_mem.go:75] "Updated machine memory state" Jul 2 11:10:35.847495 kubelet[2591]: I0702 11:10:35.847435 2591 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 11:10:35.847681 kubelet[2591]: I0702 11:10:35.847617 2591 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 11:10:35.847754 kubelet[2591]: I0702 11:10:35.847728 2591 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 11:10:35.920952 kubelet[2591]: I0702 11:10:35.920893 2591 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:35.926018 kubelet[2591]: I0702 11:10:35.925902 2591 topology_manager.go:215] "Topology Admit Handler" podUID="92041b369fa866f914c0820b83970c44" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:35.926248 kubelet[2591]: I0702 11:10:35.926097 2591 topology_manager.go:215] "Topology Admit Handler" podUID="86e26c19042bbd422101ef63370ee935" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:35.926462 kubelet[2591]: I0702 11:10:35.926400 2591 topology_manager.go:215] "Topology Admit Handler" podUID="0e67a7bfbd1b2451106d599dec0f0e6b" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:35.933657 kubelet[2591]: I0702 11:10:35.933569 2591 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:35.933932 kubelet[2591]: I0702 11:10:35.933720 2591 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-3a013adf74" Jul 2 11:10:35.934793 kubelet[2591]: W0702 11:10:35.934734 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:35.936147 kubelet[2591]: W0702 11:10:35.936103 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:35.937209 kubelet[2591]: W0702 11:10:35.937125 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:35.937439 kubelet[2591]: E0702 11:10:35.937278 2591 kubelet.go:1928] "Failed creating 
a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015261 kubelet[2591]: I0702 11:10:36.015071 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92041b369fa866f914c0820b83970c44-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" (UID: \"92041b369fa866f914c0820b83970c44\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015261 kubelet[2591]: I0702 11:10:36.015165 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015261 kubelet[2591]: I0702 11:10:36.015223 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015862 kubelet[2591]: I0702 11:10:36.015278 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015862 kubelet[2591]: I0702 11:10:36.015335 2591 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e67a7bfbd1b2451106d599dec0f0e6b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-3a013adf74\" (UID: \"0e67a7bfbd1b2451106d599dec0f0e6b\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015862 kubelet[2591]: I0702 11:10:36.015382 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92041b369fa866f914c0820b83970c44-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" (UID: \"92041b369fa866f914c0820b83970c44\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015862 kubelet[2591]: I0702 11:10:36.015445 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92041b369fa866f914c0820b83970c44-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" (UID: \"92041b369fa866f914c0820b83970c44\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.015862 kubelet[2591]: I0702 11:10:36.015513 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.016446 kubelet[2591]: I0702 11:10:36.015562 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86e26c19042bbd422101ef63370ee935-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" (UID: \"86e26c19042bbd422101ef63370ee935\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.256986 sudo[2635]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 11:10:36.257114 sudo[2635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 11:10:36.587059 sudo[2635]: pam_unix(sudo:session): session closed for user root Jul 2 11:10:36.810666 kubelet[2591]: I0702 11:10:36.810606 2591 apiserver.go:52] "Watching apiserver" Jul 2 11:10:36.814169 kubelet[2591]: I0702 11:10:36.814158 2591 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 11:10:36.834046 kubelet[2591]: W0702 11:10:36.834022 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:36.834153 kubelet[2591]: W0702 11:10:36.834053 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:36.834153 kubelet[2591]: W0702 11:10:36.834074 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:10:36.834153 kubelet[2591]: E0702 11:10:36.834079 2591 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.5-a-3a013adf74\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.834153 kubelet[2591]: E0702 11:10:36.834105 2591 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.5-a-3a013adf74\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.834153 kubelet[2591]: E0702 11:10:36.834076 2591 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-3a013adf74\" 
already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" Jul 2 11:10:36.842228 kubelet[2591]: I0702 11:10:36.842147 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-3a013adf74" podStartSLOduration=1.842137441 podStartE2EDuration="1.842137441s" podCreationTimestamp="2024-07-02 11:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:10:36.842002522 +0000 UTC m=+1.131572001" watchObservedRunningTime="2024-07-02 11:10:36.842137441 +0000 UTC m=+1.131706915" Jul 2 11:10:36.846257 kubelet[2591]: I0702 11:10:36.846236 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3a013adf74" podStartSLOduration=1.846226277 podStartE2EDuration="1.846226277s" podCreationTimestamp="2024-07-02 11:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:10:36.846157601 +0000 UTC m=+1.135727079" watchObservedRunningTime="2024-07-02 11:10:36.846226277 +0000 UTC m=+1.135795752" Jul 2 11:10:36.854939 kubelet[2591]: I0702 11:10:36.854886 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3a013adf74" podStartSLOduration=2.854881301 podStartE2EDuration="2.854881301s" podCreationTimestamp="2024-07-02 11:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:10:36.850356267 +0000 UTC m=+1.139925745" watchObservedRunningTime="2024-07-02 11:10:36.854881301 +0000 UTC m=+1.144450781" Jul 2 11:10:37.807723 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 2 11:10:37.810763 sshd[1730]: pam_unix(sshd:session): session closed for user core Jul 2 11:10:37.816847 
systemd[1]: sshd@6-145.40.90.137:22-139.178.68.195:34450.service: Deactivated successfully. Jul 2 11:10:37.818673 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 11:10:37.819059 systemd[1]: session-9.scope: Consumed 3.024s CPU time. Jul 2 11:10:37.820417 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Jul 2 11:10:37.822824 systemd-logind[1551]: Removed session 9. Jul 2 11:10:43.932101 update_engine[1553]: I0702 11:10:43.931982 1553 update_attempter.cc:509] Updating boot flags... Jul 2 11:10:51.526842 kubelet[2591]: I0702 11:10:51.526776 2591 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 11:10:51.527318 env[1560]: time="2024-07-02T11:10:51.527160045Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 11:10:51.527598 kubelet[2591]: I0702 11:10:51.527353 2591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 11:10:52.222626 kubelet[2591]: I0702 11:10:52.222599 2591 topology_manager.go:215] "Topology Admit Handler" podUID="50e03c15-bf2e-454b-b009-147dc20c419e" podNamespace="kube-system" podName="kube-proxy-s9fpn" Jul 2 11:10:52.226923 systemd[1]: Created slice kubepods-besteffort-pod50e03c15_bf2e_454b_b009_147dc20c419e.slice. Jul 2 11:10:52.228123 kubelet[2591]: I0702 11:10:52.228103 2591 topology_manager.go:215] "Topology Admit Handler" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" podNamespace="kube-system" podName="cilium-tks2g" Jul 2 11:10:52.243521 systemd[1]: Created slice kubepods-burstable-pod2a2a4868_17ec_4ab6_a62f_61e95c751c96.slice. 
Jul 2 11:10:52.326888 kubelet[2591]: I0702 11:10:52.326768 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50e03c15-bf2e-454b-b009-147dc20c419e-xtables-lock\") pod \"kube-proxy-s9fpn\" (UID: \"50e03c15-bf2e-454b-b009-147dc20c419e\") " pod="kube-system/kube-proxy-s9fpn" Jul 2 11:10:52.326888 kubelet[2591]: I0702 11:10:52.326880 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50e03c15-bf2e-454b-b009-147dc20c419e-lib-modules\") pod \"kube-proxy-s9fpn\" (UID: \"50e03c15-bf2e-454b-b009-147dc20c419e\") " pod="kube-system/kube-proxy-s9fpn" Jul 2 11:10:52.327300 kubelet[2591]: I0702 11:10:52.326933 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-run\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327300 kubelet[2591]: I0702 11:10:52.326985 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-kernel\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327300 kubelet[2591]: I0702 11:10:52.327036 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8rpk\" (UniqueName: \"kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-kube-api-access-t8rpk\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327300 kubelet[2591]: I0702 11:10:52.327083 2591 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50e03c15-bf2e-454b-b009-147dc20c419e-kube-proxy\") pod \"kube-proxy-s9fpn\" (UID: \"50e03c15-bf2e-454b-b009-147dc20c419e\") " pod="kube-system/kube-proxy-s9fpn" Jul 2 11:10:52.327300 kubelet[2591]: I0702 11:10:52.327211 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-xtables-lock\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327867 kubelet[2591]: I0702 11:10:52.327311 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-config-path\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327867 kubelet[2591]: I0702 11:10:52.327386 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hubble-tls\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327867 kubelet[2591]: I0702 11:10:52.327463 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-cgroup\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327867 kubelet[2591]: I0702 11:10:52.327569 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cni-path\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327867 kubelet[2591]: I0702 11:10:52.327632 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-etc-cni-netd\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.327867 kubelet[2591]: I0702 11:10:52.327688 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-net\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.328497 kubelet[2591]: I0702 11:10:52.327734 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-bpf-maps\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.328497 kubelet[2591]: I0702 11:10:52.327794 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-lib-modules\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.328497 kubelet[2591]: I0702 11:10:52.327849 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4p8h\" (UniqueName: \"kubernetes.io/projected/50e03c15-bf2e-454b-b009-147dc20c419e-kube-api-access-j4p8h\") pod \"kube-proxy-s9fpn\" (UID: 
\"50e03c15-bf2e-454b-b009-147dc20c419e\") " pod="kube-system/kube-proxy-s9fpn" Jul 2 11:10:52.328497 kubelet[2591]: I0702 11:10:52.327909 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hostproc\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.328497 kubelet[2591]: I0702 11:10:52.327956 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a2a4868-17ec-4ab6-a62f-61e95c751c96-clustermesh-secrets\") pod \"cilium-tks2g\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") " pod="kube-system/cilium-tks2g" Jul 2 11:10:52.540815 env[1560]: time="2024-07-02T11:10:52.540586386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9fpn,Uid:50e03c15-bf2e-454b-b009-147dc20c419e,Namespace:kube-system,Attempt:0,}" Jul 2 11:10:52.547204 env[1560]: time="2024-07-02T11:10:52.547048363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tks2g,Uid:2a2a4868-17ec-4ab6-a62f-61e95c751c96,Namespace:kube-system,Attempt:0,}" Jul 2 11:10:52.568172 env[1560]: time="2024-07-02T11:10:52.568050144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:10:52.568172 env[1560]: time="2024-07-02T11:10:52.568122062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:10:52.568172 env[1560]: time="2024-07-02T11:10:52.568151446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:10:52.568700 env[1560]: time="2024-07-02T11:10:52.568566337Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c3d1ba9a5c84dae544bd7ce23730ed2b8d26b5c5381e31623b48f190954c85e pid=2762 runtime=io.containerd.runc.v2 Jul 2 11:10:52.571782 env[1560]: time="2024-07-02T11:10:52.571683433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:10:52.571782 env[1560]: time="2024-07-02T11:10:52.571765925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:10:52.572101 env[1560]: time="2024-07-02T11:10:52.571799608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:10:52.572230 env[1560]: time="2024-07-02T11:10:52.572156607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2 pid=2773 runtime=io.containerd.runc.v2 Jul 2 11:10:52.585302 systemd[1]: Started cri-containerd-5c3d1ba9a5c84dae544bd7ce23730ed2b8d26b5c5381e31623b48f190954c85e.scope. Jul 2 11:10:52.586553 systemd[1]: Started cri-containerd-c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2.scope. 
Jul 2 11:10:52.600031 env[1560]: time="2024-07-02T11:10:52.599993550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tks2g,Uid:2a2a4868-17ec-4ab6-a62f-61e95c751c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\"" Jul 2 11:10:52.600183 env[1560]: time="2024-07-02T11:10:52.600099023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s9fpn,Uid:50e03c15-bf2e-454b-b009-147dc20c419e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c3d1ba9a5c84dae544bd7ce23730ed2b8d26b5c5381e31623b48f190954c85e\"" Jul 2 11:10:52.601079 env[1560]: time="2024-07-02T11:10:52.601064564Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 11:10:52.601731 env[1560]: time="2024-07-02T11:10:52.601689907Z" level=info msg="CreateContainer within sandbox \"5c3d1ba9a5c84dae544bd7ce23730ed2b8d26b5c5381e31623b48f190954c85e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 11:10:52.607337 env[1560]: time="2024-07-02T11:10:52.607293943Z" level=info msg="CreateContainer within sandbox \"5c3d1ba9a5c84dae544bd7ce23730ed2b8d26b5c5381e31623b48f190954c85e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8aa0253cbcc9ae8509fa0f702f4a6324fb2d97dbb98eed8b2f6663d633ef628\"" Jul 2 11:10:52.607547 env[1560]: time="2024-07-02T11:10:52.607532371Z" level=info msg="StartContainer for \"d8aa0253cbcc9ae8509fa0f702f4a6324fb2d97dbb98eed8b2f6663d633ef628\"" Jul 2 11:10:52.616550 systemd[1]: Started cri-containerd-d8aa0253cbcc9ae8509fa0f702f4a6324fb2d97dbb98eed8b2f6663d633ef628.scope. 
Jul 2 11:10:52.632492 env[1560]: time="2024-07-02T11:10:52.632457006Z" level=info msg="StartContainer for \"d8aa0253cbcc9ae8509fa0f702f4a6324fb2d97dbb98eed8b2f6663d633ef628\" returns successfully" Jul 2 11:10:52.666593 kubelet[2591]: I0702 11:10:52.666562 2591 topology_manager.go:215] "Topology Admit Handler" podUID="9490ced8-96d8-469f-a696-8effe9f4ecfd" podNamespace="kube-system" podName="cilium-operator-599987898-mvj2w" Jul 2 11:10:52.669788 systemd[1]: Created slice kubepods-besteffort-pod9490ced8_96d8_469f_a696_8effe9f4ecfd.slice. Jul 2 11:10:52.731555 kubelet[2591]: I0702 11:10:52.731482 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9490ced8-96d8-469f-a696-8effe9f4ecfd-cilium-config-path\") pod \"cilium-operator-599987898-mvj2w\" (UID: \"9490ced8-96d8-469f-a696-8effe9f4ecfd\") " pod="kube-system/cilium-operator-599987898-mvj2w" Jul 2 11:10:52.731555 kubelet[2591]: I0702 11:10:52.731543 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nd2\" (UniqueName: \"kubernetes.io/projected/9490ced8-96d8-469f-a696-8effe9f4ecfd-kube-api-access-28nd2\") pod \"cilium-operator-599987898-mvj2w\" (UID: \"9490ced8-96d8-469f-a696-8effe9f4ecfd\") " pod="kube-system/cilium-operator-599987898-mvj2w" Jul 2 11:10:52.888535 kubelet[2591]: I0702 11:10:52.888384 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s9fpn" podStartSLOduration=0.888347377 podStartE2EDuration="888.347377ms" podCreationTimestamp="2024-07-02 11:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:10:52.88808266 +0000 UTC m=+17.177652265" watchObservedRunningTime="2024-07-02 11:10:52.888347377 +0000 UTC m=+17.177916900" Jul 2 11:10:52.971810 env[1560]: 
time="2024-07-02T11:10:52.971779236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mvj2w,Uid:9490ced8-96d8-469f-a696-8effe9f4ecfd,Namespace:kube-system,Attempt:0,}" Jul 2 11:10:52.996546 env[1560]: time="2024-07-02T11:10:52.996489331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:10:52.996546 env[1560]: time="2024-07-02T11:10:52.996526408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:10:52.996546 env[1560]: time="2024-07-02T11:10:52.996542234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:10:52.996728 env[1560]: time="2024-07-02T11:10:52.996653612Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe pid=2998 runtime=io.containerd.runc.v2 Jul 2 11:10:53.005298 systemd[1]: Started cri-containerd-5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe.scope. Jul 2 11:10:53.040935 env[1560]: time="2024-07-02T11:10:53.040870900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mvj2w,Uid:9490ced8-96d8-469f-a696-8effe9f4ecfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\"" Jul 2 11:10:55.918652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231581769.mount: Deactivated successfully. 
Jul 2 11:10:57.620286 env[1560]: time="2024-07-02T11:10:57.620217388Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 11:10:57.621236 env[1560]: time="2024-07-02T11:10:57.621207660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 11:10:57.622169 env[1560]: time="2024-07-02T11:10:57.622158456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 11:10:57.622553 env[1560]: time="2024-07-02T11:10:57.622537040Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 11:10:57.623435 env[1560]: time="2024-07-02T11:10:57.623416937Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 11:10:57.624104 env[1560]: time="2024-07-02T11:10:57.624090393Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 11:10:57.628774 env[1560]: time="2024-07-02T11:10:57.628726654Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\""
Jul 2 11:10:57.629020 env[1560]: time="2024-07-02T11:10:57.629004584Z" level=info msg="StartContainer for \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\""
Jul 2 11:10:57.630045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169585514.mount: Deactivated successfully.
Jul 2 11:10:57.638572 systemd[1]: Started cri-containerd-ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c.scope.
Jul 2 11:10:57.649972 env[1560]: time="2024-07-02T11:10:57.649927360Z" level=info msg="StartContainer for \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\" returns successfully"
Jul 2 11:10:57.655717 systemd[1]: cri-containerd-ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c.scope: Deactivated successfully.
Jul 2 11:10:58.633158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c-rootfs.mount: Deactivated successfully.
Jul 2 11:10:58.769063 env[1560]: time="2024-07-02T11:10:58.768924914Z" level=info msg="shim disconnected" id=ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c
Jul 2 11:10:58.769063 env[1560]: time="2024-07-02T11:10:58.769023195Z" level=warning msg="cleaning up after shim disconnected" id=ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c namespace=k8s.io
Jul 2 11:10:58.769063 env[1560]: time="2024-07-02T11:10:58.769050128Z" level=info msg="cleaning up dead shim"
Jul 2 11:10:58.781268 env[1560]: time="2024-07-02T11:10:58.781220954Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:10:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3093 runtime=io.containerd.runc.v2\n"
Jul 2 11:10:58.901600 env[1560]: time="2024-07-02T11:10:58.901310531Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 11:10:58.915739 env[1560]: time="2024-07-02T11:10:58.915607482Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\""
Jul 2 11:10:58.916591 env[1560]: time="2024-07-02T11:10:58.916475316Z" level=info msg="StartContainer for \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\""
Jul 2 11:10:58.946007 systemd[1]: Started cri-containerd-aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06.scope.
Jul 2 11:10:58.979170 env[1560]: time="2024-07-02T11:10:58.979111120Z" level=info msg="StartContainer for \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\" returns successfully"
Jul 2 11:10:59.002439 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 11:10:59.002928 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 11:10:59.003214 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 11:10:59.005925 systemd[1]: Starting systemd-sysctl.service...
Jul 2 11:10:59.006505 systemd[1]: cri-containerd-aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06.scope: Deactivated successfully.
Jul 2 11:10:59.018882 systemd[1]: Finished systemd-sysctl.service.
Jul 2 11:10:59.052559 env[1560]: time="2024-07-02T11:10:59.052434860Z" level=info msg="shim disconnected" id=aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06
Jul 2 11:10:59.052895 env[1560]: time="2024-07-02T11:10:59.052558642Z" level=warning msg="cleaning up after shim disconnected" id=aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06 namespace=k8s.io
Jul 2 11:10:59.052895 env[1560]: time="2024-07-02T11:10:59.052591081Z" level=info msg="cleaning up dead shim"
Jul 2 11:10:59.067899 env[1560]: time="2024-07-02T11:10:59.067811428Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:10:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3158 runtime=io.containerd.runc.v2\n"
Jul 2 11:10:59.628553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06-rootfs.mount: Deactivated successfully.
Jul 2 11:10:59.820794 env[1560]: time="2024-07-02T11:10:59.820769980Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 11:10:59.821313 env[1560]: time="2024-07-02T11:10:59.821270818Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 11:10:59.822152 env[1560]: time="2024-07-02T11:10:59.822111719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 11:10:59.822499 env[1560]: time="2024-07-02T11:10:59.822457086Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 11:10:59.824460 env[1560]: time="2024-07-02T11:10:59.824442687Z" level=info msg="CreateContainer within sandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 11:10:59.829389 env[1560]: time="2024-07-02T11:10:59.829371955Z" level=info msg="CreateContainer within sandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\""
Jul 2 11:10:59.829753 env[1560]: time="2024-07-02T11:10:59.829703865Z" level=info msg="StartContainer for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\""
Jul 2 11:10:59.839450 systemd[1]: Started cri-containerd-d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5.scope.
Jul 2 11:10:59.851710 env[1560]: time="2024-07-02T11:10:59.851653717Z" level=info msg="StartContainer for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" returns successfully"
Jul 2 11:10:59.901434 env[1560]: time="2024-07-02T11:10:59.901370754Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 11:10:59.921441 kubelet[2591]: I0702 11:10:59.921376 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mvj2w" podStartSLOduration=1.139650447 podStartE2EDuration="7.921361398s" podCreationTimestamp="2024-07-02 11:10:52 +0000 UTC" firstStartedPulling="2024-07-02 11:10:53.041675515 +0000 UTC m=+17.331244997" lastFinishedPulling="2024-07-02 11:10:59.823386471 +0000 UTC m=+24.112955948" observedRunningTime="2024-07-02 11:10:59.921219845 +0000 UTC m=+24.210789322" watchObservedRunningTime="2024-07-02 11:10:59.921361398 +0000 UTC m=+24.210930871"
Jul 2 11:10:59.925281 env[1560]: time="2024-07-02T11:10:59.925253890Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\""
Jul 2 11:10:59.925650 env[1560]: time="2024-07-02T11:10:59.925599982Z" level=info msg="StartContainer for \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\""
Jul 2 11:10:59.933813 systemd[1]: Started cri-containerd-5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7.scope.
Jul 2 11:10:59.947790 env[1560]: time="2024-07-02T11:10:59.947735802Z" level=info msg="StartContainer for \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\" returns successfully"
Jul 2 11:10:59.949190 systemd[1]: cri-containerd-5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7.scope: Deactivated successfully.
Jul 2 11:11:00.103865 env[1560]: time="2024-07-02T11:11:00.103725367Z" level=info msg="shim disconnected" id=5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7
Jul 2 11:11:00.103865 env[1560]: time="2024-07-02T11:11:00.103830553Z" level=warning msg="cleaning up after shim disconnected" id=5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7 namespace=k8s.io
Jul 2 11:11:00.103865 env[1560]: time="2024-07-02T11:11:00.103870744Z" level=info msg="cleaning up dead shim"
Jul 2 11:11:00.118567 env[1560]: time="2024-07-02T11:11:00.118501737Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:11:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3265 runtime=io.containerd.runc.v2\n"
Jul 2 11:11:00.914758 env[1560]: time="2024-07-02T11:11:00.914630353Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 11:11:00.930153 env[1560]: time="2024-07-02T11:11:00.930034840Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\""
Jul 2 11:11:00.931012 env[1560]: time="2024-07-02T11:11:00.930965861Z" level=info msg="StartContainer for \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\""
Jul 2 11:11:00.939919 systemd[1]: Started cri-containerd-c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc.scope.
Jul 2 11:11:00.950722 env[1560]: time="2024-07-02T11:11:00.950667361Z" level=info msg="StartContainer for \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\" returns successfully"
Jul 2 11:11:00.950910 systemd[1]: cri-containerd-c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc.scope: Deactivated successfully.
Jul 2 11:11:00.959828 env[1560]: time="2024-07-02T11:11:00.959772497Z" level=info msg="shim disconnected" id=c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc
Jul 2 11:11:00.959828 env[1560]: time="2024-07-02T11:11:00.959800712Z" level=warning msg="cleaning up after shim disconnected" id=c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc namespace=k8s.io
Jul 2 11:11:00.959828 env[1560]: time="2024-07-02T11:11:00.959806826Z" level=info msg="cleaning up dead shim"
Jul 2 11:11:00.963338 env[1560]: time="2024-07-02T11:11:00.963321347Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:11:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3317 runtime=io.containerd.runc.v2\n"
Jul 2 11:11:01.632828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc-rootfs.mount: Deactivated successfully.
Jul 2 11:11:01.925025 env[1560]: time="2024-07-02T11:11:01.924914462Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 11:11:01.938158 env[1560]: time="2024-07-02T11:11:01.938057690Z" level=info msg="CreateContainer within sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\""
Jul 2 11:11:01.938824 env[1560]: time="2024-07-02T11:11:01.938762083Z" level=info msg="StartContainer for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\""
Jul 2 11:11:01.965426 systemd[1]: Started cri-containerd-9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab.scope.
Jul 2 11:11:02.005385 env[1560]: time="2024-07-02T11:11:02.005339573Z" level=info msg="StartContainer for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" returns successfully"
Jul 2 11:11:02.080533 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 11:11:02.110138 kubelet[2591]: I0702 11:11:02.110123 2591 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 11:11:02.120084 kubelet[2591]: I0702 11:11:02.120033 2591 topology_manager.go:215] "Topology Admit Handler" podUID="ba735ef7-9004-44cb-95b7-8072e7112e69" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q8sd7"
Jul 2 11:11:02.120240 kubelet[2591]: I0702 11:11:02.120230 2591 topology_manager.go:215] "Topology Admit Handler" podUID="8f66db10-f16f-4afb-817c-850ce6633381" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d7m5l"
Jul 2 11:11:02.123212 systemd[1]: Created slice kubepods-burstable-pod8f66db10_f16f_4afb_817c_850ce6633381.slice.
Jul 2 11:11:02.125323 systemd[1]: Created slice kubepods-burstable-podba735ef7_9004_44cb_95b7_8072e7112e69.slice.
Jul 2 11:11:02.202740 kubelet[2591]: I0702 11:11:02.202665 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba735ef7-9004-44cb-95b7-8072e7112e69-config-volume\") pod \"coredns-7db6d8ff4d-q8sd7\" (UID: \"ba735ef7-9004-44cb-95b7-8072e7112e69\") " pod="kube-system/coredns-7db6d8ff4d-q8sd7"
Jul 2 11:11:02.202740 kubelet[2591]: I0702 11:11:02.202691 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qvv\" (UniqueName: \"kubernetes.io/projected/ba735ef7-9004-44cb-95b7-8072e7112e69-kube-api-access-c2qvv\") pod \"coredns-7db6d8ff4d-q8sd7\" (UID: \"ba735ef7-9004-44cb-95b7-8072e7112e69\") " pod="kube-system/coredns-7db6d8ff4d-q8sd7"
Jul 2 11:11:02.202740 kubelet[2591]: I0702 11:11:02.202703 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f66db10-f16f-4afb-817c-850ce6633381-config-volume\") pod \"coredns-7db6d8ff4d-d7m5l\" (UID: \"8f66db10-f16f-4afb-817c-850ce6633381\") " pod="kube-system/coredns-7db6d8ff4d-d7m5l"
Jul 2 11:11:02.202740 kubelet[2591]: I0702 11:11:02.202713 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psqsv\" (UniqueName: \"kubernetes.io/projected/8f66db10-f16f-4afb-817c-850ce6633381-kube-api-access-psqsv\") pod \"coredns-7db6d8ff4d-d7m5l\" (UID: \"8f66db10-f16f-4afb-817c-850ce6633381\") " pod="kube-system/coredns-7db6d8ff4d-d7m5l"
Jul 2 11:11:02.226550 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 11:11:02.426636 env[1560]: time="2024-07-02T11:11:02.426453236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7m5l,Uid:8f66db10-f16f-4afb-817c-850ce6633381,Namespace:kube-system,Attempt:0,}"
Jul 2 11:11:02.427695 env[1560]: time="2024-07-02T11:11:02.427609908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8sd7,Uid:ba735ef7-9004-44cb-95b7-8072e7112e69,Namespace:kube-system,Attempt:0,}"
Jul 2 11:11:02.940112 kubelet[2591]: I0702 11:11:02.940016 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tks2g" podStartSLOduration=5.91750655 podStartE2EDuration="10.939999262s" podCreationTimestamp="2024-07-02 11:10:52 +0000 UTC" firstStartedPulling="2024-07-02 11:10:52.600834975 +0000 UTC m=+16.890404454" lastFinishedPulling="2024-07-02 11:10:57.623327693 +0000 UTC m=+21.912897166" observedRunningTime="2024-07-02 11:11:02.939683521 +0000 UTC m=+27.229252998" watchObservedRunningTime="2024-07-02 11:11:02.939999262 +0000 UTC m=+27.229568735"
Jul 2 11:11:03.818459 systemd-networkd[1325]: cilium_host: Link UP
Jul 2 11:11:03.818553 systemd-networkd[1325]: cilium_net: Link UP
Jul 2 11:11:03.825743 systemd-networkd[1325]: cilium_net: Gained carrier
Jul 2 11:11:03.832887 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 11:11:03.832942 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 11:11:03.832976 systemd-networkd[1325]: cilium_host: Gained carrier
Jul 2 11:11:03.879167 systemd-networkd[1325]: cilium_vxlan: Link UP
Jul 2 11:11:03.879170 systemd-networkd[1325]: cilium_vxlan: Gained carrier
Jul 2 11:11:04.013551 kernel: NET: Registered PF_ALG protocol family
Jul 2 11:11:04.382627 systemd-networkd[1325]: cilium_net: Gained IPv6LL
Jul 2 11:11:04.446668 systemd-networkd[1325]: cilium_host: Gained IPv6LL
Jul 2 11:11:04.491606 systemd-networkd[1325]: lxc_health: Link UP
Jul 2 11:11:04.513438 systemd-networkd[1325]: lxc_health: Gained carrier
Jul 2 11:11:04.513632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 11:11:04.975240 systemd-networkd[1325]: lxc3ee6650706f7: Link UP
Jul 2 11:11:05.023495 kernel: eth0: renamed from tmpde379
Jul 2 11:11:05.038699 kernel: eth0: renamed from tmp7e745
Jul 2 11:11:05.053537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 11:11:05.053591 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3bb3611b6c00: link becomes ready
Jul 2 11:11:05.061528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 11:11:05.075407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3ee6650706f7: link becomes ready
Jul 2 11:11:05.075458 systemd-networkd[1325]: lxc3bb3611b6c00: Link UP
Jul 2 11:11:05.075868 systemd-networkd[1325]: lxc3bb3611b6c00: Gained carrier
Jul 2 11:11:05.076005 systemd-networkd[1325]: lxc3ee6650706f7: Gained carrier
Jul 2 11:11:05.790638 systemd-networkd[1325]: cilium_vxlan: Gained IPv6LL
Jul 2 11:11:06.046582 systemd-networkd[1325]: lxc_health: Gained IPv6LL
Jul 2 11:11:07.070657 systemd-networkd[1325]: lxc3bb3611b6c00: Gained IPv6LL
Jul 2 11:11:07.135543 systemd-networkd[1325]: lxc3ee6650706f7: Gained IPv6LL
Jul 2 11:11:07.445429 env[1560]: time="2024-07-02T11:11:07.445384171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 11:11:07.445429 env[1560]: time="2024-07-02T11:11:07.445413412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 11:11:07.445429 env[1560]: time="2024-07-02T11:11:07.445420801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 11:11:07.445699 env[1560]: time="2024-07-02T11:11:07.445522993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de3791075a7b4ab29acba23c7984c751eaaa4d647ddfff7170a5149a2bc50634 pid=4001 runtime=io.containerd.runc.v2
Jul 2 11:11:07.449136 env[1560]: time="2024-07-02T11:11:07.449072969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 11:11:07.449136 env[1560]: time="2024-07-02T11:11:07.449093925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 11:11:07.449136 env[1560]: time="2024-07-02T11:11:07.449100693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 11:11:07.449271 env[1560]: time="2024-07-02T11:11:07.449170557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e7459c0d71c4c1a29aae32e3e96a95f83d430bfb7c9e1ec8b7b0ed1ae693681 pid=4023 runtime=io.containerd.runc.v2
Jul 2 11:11:07.454841 systemd[1]: Started cri-containerd-de3791075a7b4ab29acba23c7984c751eaaa4d647ddfff7170a5149a2bc50634.scope.
Jul 2 11:11:07.457686 systemd[1]: Started cri-containerd-7e7459c0d71c4c1a29aae32e3e96a95f83d430bfb7c9e1ec8b7b0ed1ae693681.scope.
Jul 2 11:11:07.476925 env[1560]: time="2024-07-02T11:11:07.476893336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d7m5l,Uid:8f66db10-f16f-4afb-817c-850ce6633381,Namespace:kube-system,Attempt:0,} returns sandbox id \"de3791075a7b4ab29acba23c7984c751eaaa4d647ddfff7170a5149a2bc50634\""
Jul 2 11:11:07.478196 env[1560]: time="2024-07-02T11:11:07.478180033Z" level=info msg="CreateContainer within sandbox \"de3791075a7b4ab29acba23c7984c751eaaa4d647ddfff7170a5149a2bc50634\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 11:11:07.480120 env[1560]: time="2024-07-02T11:11:07.480100301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q8sd7,Uid:ba735ef7-9004-44cb-95b7-8072e7112e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e7459c0d71c4c1a29aae32e3e96a95f83d430bfb7c9e1ec8b7b0ed1ae693681\""
Jul 2 11:11:07.481229 env[1560]: time="2024-07-02T11:11:07.481213397Z" level=info msg="CreateContainer within sandbox \"7e7459c0d71c4c1a29aae32e3e96a95f83d430bfb7c9e1ec8b7b0ed1ae693681\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 11:11:07.485419 env[1560]: time="2024-07-02T11:11:07.485401925Z" level=info msg="CreateContainer within sandbox \"de3791075a7b4ab29acba23c7984c751eaaa4d647ddfff7170a5149a2bc50634\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ffbdd9c4777230e90f8e7b742b6040e406ff073bd3d4d4ee731c16ede8a3ec68\""
Jul 2 11:11:07.485661 env[1560]: time="2024-07-02T11:11:07.485620578Z" level=info msg="StartContainer for \"ffbdd9c4777230e90f8e7b742b6040e406ff073bd3d4d4ee731c16ede8a3ec68\""
Jul 2 11:11:07.486432 env[1560]: time="2024-07-02T11:11:07.486415567Z" level=info msg="CreateContainer within sandbox \"7e7459c0d71c4c1a29aae32e3e96a95f83d430bfb7c9e1ec8b7b0ed1ae693681\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18b8ad519e82f6e5ee8d91e8ea7f27985817911c3556e8a7e5078d3a143f4c56\""
Jul 2 11:11:07.486636 env[1560]: time="2024-07-02T11:11:07.486595536Z" level=info msg="StartContainer for \"18b8ad519e82f6e5ee8d91e8ea7f27985817911c3556e8a7e5078d3a143f4c56\""
Jul 2 11:11:07.513429 systemd[1]: Started cri-containerd-ffbdd9c4777230e90f8e7b742b6040e406ff073bd3d4d4ee731c16ede8a3ec68.scope.
Jul 2 11:11:07.515414 systemd[1]: Started cri-containerd-18b8ad519e82f6e5ee8d91e8ea7f27985817911c3556e8a7e5078d3a143f4c56.scope.
Jul 2 11:11:07.535705 env[1560]: time="2024-07-02T11:11:07.535669275Z" level=info msg="StartContainer for \"18b8ad519e82f6e5ee8d91e8ea7f27985817911c3556e8a7e5078d3a143f4c56\" returns successfully"
Jul 2 11:11:07.535813 env[1560]: time="2024-07-02T11:11:07.535670152Z" level=info msg="StartContainer for \"ffbdd9c4777230e90f8e7b742b6040e406ff073bd3d4d4ee731c16ede8a3ec68\" returns successfully"
Jul 2 11:11:07.953327 kubelet[2591]: I0702 11:11:07.953199 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d7m5l" podStartSLOduration=15.953149855 podStartE2EDuration="15.953149855s" podCreationTimestamp="2024-07-02 11:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:11:07.951261382 +0000 UTC m=+32.240830918" watchObservedRunningTime="2024-07-02 11:11:07.953149855 +0000 UTC m=+32.242719405"
Jul 2 11:11:07.987333 kubelet[2591]: I0702 11:11:07.987226 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-q8sd7" podStartSLOduration=15.987187444 podStartE2EDuration="15.987187444s" podCreationTimestamp="2024-07-02 11:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:11:07.987171168 +0000 UTC m=+32.276740744" watchObservedRunningTime="2024-07-02 11:11:07.987187444 +0000 UTC m=+32.276756983"
Jul 2 11:11:08.452349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344290575.mount: Deactivated successfully.
Jul 2 11:12:59.965929 update_engine[1553]: I0702 11:12:59.965724 1553 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 2 11:12:59.965929 update_engine[1553]: I0702 11:12:59.965806 1553 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 2 11:12:59.967148 update_engine[1553]: I0702 11:12:59.966704 1553 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 2 11:12:59.967723 update_engine[1553]: I0702 11:12:59.967675 1553 omaha_request_params.cc:62] Current group set to lts
Jul 2 11:12:59.968053 update_engine[1553]: I0702 11:12:59.967976 1553 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 2 11:12:59.968053 update_engine[1553]: I0702 11:12:59.967998 1553 update_attempter.cc:643] Scheduling an action processor start.
Jul 2 11:12:59.968053 update_engine[1553]: I0702 11:12:59.968031 1553 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 11:12:59.968511 update_engine[1553]: I0702 11:12:59.968096 1553 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 2 11:12:59.968511 update_engine[1553]: I0702 11:12:59.968236 1553 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 11:12:59.968511 update_engine[1553]: I0702 11:12:59.968253 1553 omaha_request_action.cc:271] Request:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]:
Jul 2 11:12:59.968511 update_engine[1553]: I0702 11:12:59.968264 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 11:12:59.969639 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 2 11:12:59.971400 update_engine[1553]: I0702 11:12:59.971319 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 11:12:59.971645 update_engine[1553]: E0702 11:12:59.971571 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 11:12:59.971792 update_engine[1553]: I0702 11:12:59.971733 1553 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 2 11:13:09.931127 update_engine[1553]: I0702 11:13:09.931000 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 11:13:09.932133 update_engine[1553]: I0702 11:13:09.931540 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 11:13:09.932133 update_engine[1553]: E0702 11:13:09.931757 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 11:13:09.932133 update_engine[1553]: I0702 11:13:09.931941 1553 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jul 2 11:13:19.935129 update_engine[1553]: I0702 11:13:19.934953 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 11:13:19.936101 update_engine[1553]: I0702 11:13:19.935498 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 11:13:19.936101 update_engine[1553]: E0702 11:13:19.935708 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 11:13:19.936101 update_engine[1553]: I0702 11:13:19.935881 1553 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jul 2 11:13:29.933852 update_engine[1553]: I0702 11:13:29.933727 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934242 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 11:13:29.935281 update_engine[1553]: E0702 11:13:29.934463 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934651 1553 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934669 1553 omaha_request_action.cc:621] Omaha request response:
Jul 2 11:13:29.935281 update_engine[1553]: E0702 11:13:29.934815 1553 omaha_request_action.cc:640] Omaha request network transfer failed.
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934845 1553 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934855 1553 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934865 1553 update_attempter.cc:306] Processing Done.
Jul 2 11:13:29.935281 update_engine[1553]: E0702 11:13:29.934890 1553 update_attempter.cc:619] Update failed.
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934899 1553 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934908 1553 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.934918 1553 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.935072 1553 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.935124 1553 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 11:13:29.935281 update_engine[1553]: I0702 11:13:29.935135 1553 omaha_request_action.cc:271] Request:
Jul 2 11:13:29.935281 update_engine[1553]:
Jul 2 11:13:29.935281 update_engine[1553]:
Jul 2 11:13:29.936931 update_engine[1553]:
Jul 2 11:13:29.936931 update_engine[1553]:
Jul 2 11:13:29.936931 update_engine[1553]:
Jul 2 11:13:29.936931 update_engine[1553]:
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935146 1553 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935516 1553 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 11:13:29.936931 update_engine[1553]: E0702 11:13:29.935688 1553 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935824 1553 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935840 1553 omaha_request_action.cc:621] Omaha request response:
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935851 1553 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935859 1553 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935868 1553 update_attempter.cc:306] Processing Done.
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935875 1553 update_attempter.cc:310] Error event sent.
Jul 2 11:13:29.936931 update_engine[1553]: I0702 11:13:29.935896 1553 update_check_scheduler.cc:74] Next update check in 48m40s
Jul 2 11:13:29.938208 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 2 11:13:29.938208 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 2 11:16:05.037332 systemd[1]: Started sshd@7-145.40.90.137:22-203.135.101.182:34632.service.
Jul 2 11:17:44.344760 systemd[1]: Started sshd@8-145.40.90.137:22-139.178.68.195:43120.service.
Jul 2 11:17:44.376714 sshd[4225]: Accepted publickey for core from 139.178.68.195 port 43120 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:17:44.377472 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:17:44.380144 systemd-logind[1551]: New session 10 of user core.
Jul 2 11:17:44.380679 systemd[1]: Started session-10.scope.
Jul 2 11:17:44.471827 sshd[4225]: pam_unix(sshd:session): session closed for user core
Jul 2 11:17:44.473266 systemd[1]: sshd@8-145.40.90.137:22-139.178.68.195:43120.service: Deactivated successfully.
Jul 2 11:17:44.473752 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 11:17:44.474097 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit.
Jul 2 11:17:44.474480 systemd-logind[1551]: Removed session 10.
Jul 2 11:17:49.482855 systemd[1]: Started sshd@9-145.40.90.137:22-139.178.68.195:43136.service.
Jul 2 11:17:49.513926 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 43136 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:17:49.514658 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:17:49.517169 systemd-logind[1551]: New session 11 of user core.
Jul 2 11:17:49.517657 systemd[1]: Started session-11.scope.
Jul 2 11:17:49.601006 sshd[4253]: pam_unix(sshd:session): session closed for user core
Jul 2 11:17:49.602553 systemd[1]: sshd@9-145.40.90.137:22-139.178.68.195:43136.service: Deactivated successfully.
Jul 2 11:17:49.602972 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 11:17:49.603334 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit.
Jul 2 11:17:49.604012 systemd-logind[1551]: Removed session 11.
Jul 2 11:17:54.610403 systemd[1]: Started sshd@10-145.40.90.137:22-139.178.68.195:33788.service.
Jul 2 11:17:54.638720 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 33788 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:17:54.639590 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:17:54.642557 systemd-logind[1551]: New session 12 of user core.
Jul 2 11:17:54.643158 systemd[1]: Started session-12.scope.
Jul 2 11:17:54.735368 sshd[4283]: pam_unix(sshd:session): session closed for user core
Jul 2 11:17:54.736976 systemd[1]: sshd@10-145.40.90.137:22-139.178.68.195:33788.service: Deactivated successfully.
Jul 2 11:17:54.737392 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 11:17:54.737839 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit.
Jul 2 11:17:54.738380 systemd-logind[1551]: Removed session 12.
Jul 2 11:17:59.746919 systemd[1]: Started sshd@11-145.40.90.137:22-139.178.68.195:33800.service.
Jul 2 11:17:59.778225 sshd[4310]: Accepted publickey for core from 139.178.68.195 port 33800 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:17:59.778944 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:17:59.781467 systemd-logind[1551]: New session 13 of user core.
Jul 2 11:17:59.782025 systemd[1]: Started session-13.scope.
Jul 2 11:17:59.869931 sshd[4310]: pam_unix(sshd:session): session closed for user core
Jul 2 11:17:59.871800 systemd[1]: sshd@11-145.40.90.137:22-139.178.68.195:33800.service: Deactivated successfully.
Jul 2 11:17:59.872149 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 11:17:59.872443 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit.
Jul 2 11:17:59.873027 systemd[1]: Started sshd@12-145.40.90.137:22-139.178.68.195:33812.service.
Jul 2 11:17:59.873377 systemd-logind[1551]: Removed session 13.
Jul 2 11:17:59.900746 sshd[4336]: Accepted publickey for core from 139.178.68.195 port 33812 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:17:59.901464 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:17:59.903875 systemd-logind[1551]: New session 14 of user core.
Jul 2 11:17:59.904353 systemd[1]: Started session-14.scope.
Jul 2 11:18:00.034203 sshd[4336]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:00.036098 systemd[1]: sshd@12-145.40.90.137:22-139.178.68.195:33812.service: Deactivated successfully.
Jul 2 11:18:00.036471 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 11:18:00.036825 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit.
Jul 2 11:18:00.037507 systemd[1]: Started sshd@13-145.40.90.137:22-139.178.68.195:33820.service.
Jul 2 11:18:00.037906 systemd-logind[1551]: Removed session 14.
Jul 2 11:18:00.066214 sshd[4360]: Accepted publickey for core from 139.178.68.195 port 33820 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:00.067066 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:00.069690 systemd-logind[1551]: New session 15 of user core.
Jul 2 11:18:00.070148 systemd[1]: Started session-15.scope.
Jul 2 11:18:00.188153 sshd[4360]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:00.190488 systemd[1]: sshd@13-145.40.90.137:22-139.178.68.195:33820.service: Deactivated successfully.
Jul 2 11:18:00.191177 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 11:18:00.191824 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit.
Jul 2 11:18:00.192686 systemd-logind[1551]: Removed session 15.
Jul 2 11:18:05.044340 systemd[1]: sshd@7-145.40.90.137:22-203.135.101.182:34632.service: Deactivated successfully.
Jul 2 11:18:05.197688 systemd[1]: Started sshd@14-145.40.90.137:22-139.178.68.195:35408.service.
Jul 2 11:18:05.225744 sshd[4387]: Accepted publickey for core from 139.178.68.195 port 35408 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:05.226593 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:05.229163 systemd-logind[1551]: New session 16 of user core.
Jul 2 11:18:05.229700 systemd[1]: Started session-16.scope.
Jul 2 11:18:05.315130 sshd[4387]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:05.316666 systemd[1]: sshd@14-145.40.90.137:22-139.178.68.195:35408.service: Deactivated successfully.
Jul 2 11:18:05.317108 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 11:18:05.317420 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit.
Jul 2 11:18:05.318073 systemd-logind[1551]: Removed session 16.
Jul 2 11:18:08.536612 systemd[1]: Started sshd@15-145.40.90.137:22-203.135.101.182:52802.service.
Jul 2 11:18:10.324369 systemd[1]: Started sshd@16-145.40.90.137:22-139.178.68.195:35414.service.
Jul 2 11:18:10.356939 sshd[4414]: Accepted publickey for core from 139.178.68.195 port 35414 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:10.357645 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:10.360089 systemd-logind[1551]: New session 17 of user core.
Jul 2 11:18:10.360511 systemd[1]: Started session-17.scope.
Jul 2 11:18:10.444522 sshd[4414]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:10.446322 systemd[1]: sshd@16-145.40.90.137:22-139.178.68.195:35414.service: Deactivated successfully.
Jul 2 11:18:10.446684 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 11:18:10.447083 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit.
Jul 2 11:18:10.447686 systemd[1]: Started sshd@17-145.40.90.137:22-139.178.68.195:35420.service.
Jul 2 11:18:10.448117 systemd-logind[1551]: Removed session 17.
Jul 2 11:18:10.475550 sshd[4440]: Accepted publickey for core from 139.178.68.195 port 35420 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:10.476368 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:10.478904 systemd-logind[1551]: New session 18 of user core.
Jul 2 11:18:10.479387 systemd[1]: Started session-18.scope.
Jul 2 11:18:10.627262 sshd[4440]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:10.628996 systemd[1]: sshd@17-145.40.90.137:22-139.178.68.195:35420.service: Deactivated successfully.
Jul 2 11:18:10.629354 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 11:18:10.629713 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit.
Jul 2 11:18:10.630365 systemd[1]: Started sshd@18-145.40.90.137:22-139.178.68.195:35422.service.
Jul 2 11:18:10.630880 systemd-logind[1551]: Removed session 18.
Jul 2 11:18:10.669471 sshd[4463]: Accepted publickey for core from 139.178.68.195 port 35422 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:10.670409 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:10.673560 systemd-logind[1551]: New session 19 of user core.
Jul 2 11:18:10.674239 systemd[1]: Started session-19.scope.
Jul 2 11:18:11.744876 sshd[4463]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:11.747281 systemd[1]: sshd@18-145.40.90.137:22-139.178.68.195:35422.service: Deactivated successfully.
Jul 2 11:18:11.747745 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 11:18:11.748148 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit.
Jul 2 11:18:11.748977 systemd[1]: Started sshd@19-145.40.90.137:22-139.178.68.195:35426.service.
Jul 2 11:18:11.749625 systemd-logind[1551]: Removed session 19.
Jul 2 11:18:11.783332 sshd[4493]: Accepted publickey for core from 139.178.68.195 port 35426 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:11.784292 sshd[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:11.787020 systemd-logind[1551]: New session 20 of user core.
Jul 2 11:18:11.787603 systemd[1]: Started session-20.scope.
Jul 2 11:18:11.994540 sshd[4493]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:11.996850 systemd[1]: sshd@19-145.40.90.137:22-139.178.68.195:35426.service: Deactivated successfully.
Jul 2 11:18:11.997368 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 11:18:11.997899 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit.
Jul 2 11:18:11.998558 systemd[1]: Started sshd@20-145.40.90.137:22-139.178.68.195:35440.service.
Jul 2 11:18:11.999004 systemd-logind[1551]: Removed session 20.
Jul 2 11:18:12.027645 sshd[4519]: Accepted publickey for core from 139.178.68.195 port 35440 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:12.028373 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:12.031025 systemd-logind[1551]: New session 21 of user core.
Jul 2 11:18:12.031538 systemd[1]: Started session-21.scope.
Jul 2 11:18:12.163192 sshd[4519]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:12.165005 systemd[1]: sshd@20-145.40.90.137:22-139.178.68.195:35440.service: Deactivated successfully.
Jul 2 11:18:12.165528 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 11:18:12.166039 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit.
Jul 2 11:18:12.166726 systemd-logind[1551]: Removed session 21.
Jul 2 11:18:17.175233 systemd[1]: Started sshd@21-145.40.90.137:22-139.178.68.195:34678.service.
Jul 2 11:18:17.207366 sshd[4548]: Accepted publickey for core from 139.178.68.195 port 34678 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:17.208139 sshd[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:17.210866 systemd-logind[1551]: New session 22 of user core.
Jul 2 11:18:17.211476 systemd[1]: Started session-22.scope.
Jul 2 11:18:17.294473 sshd[4548]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:17.295960 systemd[1]: sshd@21-145.40.90.137:22-139.178.68.195:34678.service: Deactivated successfully.
Jul 2 11:18:17.296390 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 11:18:17.296774 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit.
Jul 2 11:18:17.297310 systemd-logind[1551]: Removed session 22.
Jul 2 11:18:22.304001 systemd[1]: Started sshd@22-145.40.90.137:22-139.178.68.195:34682.service.
Jul 2 11:18:22.332066 sshd[4573]: Accepted publickey for core from 139.178.68.195 port 34682 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:22.332953 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:22.335499 systemd-logind[1551]: New session 23 of user core.
Jul 2 11:18:22.336146 systemd[1]: Started session-23.scope.
Jul 2 11:18:22.421179 sshd[4573]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:22.422515 systemd[1]: sshd@22-145.40.90.137:22-139.178.68.195:34682.service: Deactivated successfully.
Jul 2 11:18:22.422977 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 11:18:22.423320 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit.
Jul 2 11:18:22.423927 systemd-logind[1551]: Removed session 23.
Jul 2 11:18:27.430516 systemd[1]: Started sshd@23-145.40.90.137:22-139.178.68.195:34268.service.
Jul 2 11:18:27.458902 sshd[4599]: Accepted publickey for core from 139.178.68.195 port 34268 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:27.459918 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:27.462873 systemd-logind[1551]: New session 24 of user core.
Jul 2 11:18:27.463443 systemd[1]: Started session-24.scope.
Jul 2 11:18:27.550455 sshd[4599]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:27.552331 systemd[1]: sshd@23-145.40.90.137:22-139.178.68.195:34268.service: Deactivated successfully.
Jul 2 11:18:27.552712 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 11:18:27.553033 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit.
Jul 2 11:18:27.553637 systemd[1]: Started sshd@24-145.40.90.137:22-139.178.68.195:34272.service.
Jul 2 11:18:27.554029 systemd-logind[1551]: Removed session 24.
Jul 2 11:18:27.581998 sshd[4623]: Accepted publickey for core from 139.178.68.195 port 34272 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:27.582825 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:27.585443 systemd-logind[1551]: New session 25 of user core.
Jul 2 11:18:27.586150 systemd[1]: Started session-25.scope.
Jul 2 11:18:28.897576 env[1560]: time="2024-07-02T11:18:28.897545346Z" level=info msg="StopContainer for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" with timeout 30 (s)"
Jul 2 11:18:28.897867 env[1560]: time="2024-07-02T11:18:28.897817082Z" level=info msg="Stop container \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" with signal terminated"
Jul 2 11:18:28.903535 systemd[1]: cri-containerd-d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5.scope: Deactivated successfully.
Jul 2 11:18:28.903734 systemd[1]: cri-containerd-d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5.scope: Consumed 1.016s CPU time.
Jul 2 11:18:28.909373 env[1560]: time="2024-07-02T11:18:28.909332569Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 11:18:28.912303 env[1560]: time="2024-07-02T11:18:28.912286060Z" level=info msg="StopContainer for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" with timeout 2 (s)"
Jul 2 11:18:28.912413 env[1560]: time="2024-07-02T11:18:28.912398960Z" level=info msg="Stop container \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" with signal terminated"
Jul 2 11:18:28.913873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5-rootfs.mount: Deactivated successfully.
Jul 2 11:18:28.914674 env[1560]: time="2024-07-02T11:18:28.914651714Z" level=info msg="shim disconnected" id=d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5
Jul 2 11:18:28.914729 env[1560]: time="2024-07-02T11:18:28.914677128Z" level=warning msg="cleaning up after shim disconnected" id=d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5 namespace=k8s.io
Jul 2 11:18:28.914729 env[1560]: time="2024-07-02T11:18:28.914684445Z" level=info msg="cleaning up dead shim"
Jul 2 11:18:28.916064 systemd-networkd[1325]: lxc_health: Link DOWN
Jul 2 11:18:28.916068 systemd-networkd[1325]: lxc_health: Lost carrier
Jul 2 11:18:28.918782 env[1560]: time="2024-07-02T11:18:28.918762291Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4690 runtime=io.containerd.runc.v2\n"
Jul 2 11:18:28.919462 env[1560]: time="2024-07-02T11:18:28.919447167Z" level=info msg="StopContainer for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" returns successfully"
Jul 2 11:18:28.919859 env[1560]: time="2024-07-02T11:18:28.919846197Z" level=info msg="StopPodSandbox for \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\""
Jul 2 11:18:28.919896 env[1560]: time="2024-07-02T11:18:28.919884082Z" level=info msg="Container to stop \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:18:28.921241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe-shm.mount: Deactivated successfully.
Jul 2 11:18:28.948025 systemd[1]: cri-containerd-5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe.scope: Deactivated successfully.
Jul 2 11:18:28.964981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe-rootfs.mount: Deactivated successfully.
Jul 2 11:18:28.995306 systemd[1]: cri-containerd-9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab.scope: Deactivated successfully.
Jul 2 11:18:28.995895 systemd[1]: cri-containerd-9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab.scope: Consumed 6.961s CPU time.
Jul 2 11:18:29.001668 env[1560]: time="2024-07-02T11:18:29.001463953Z" level=info msg="shim disconnected" id=5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe
Jul 2 11:18:29.001944 env[1560]: time="2024-07-02T11:18:29.001648058Z" level=warning msg="cleaning up after shim disconnected" id=5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe namespace=k8s.io
Jul 2 11:18:29.001944 env[1560]: time="2024-07-02T11:18:29.001707211Z" level=info msg="cleaning up dead shim"
Jul 2 11:18:29.018906 env[1560]: time="2024-07-02T11:18:29.018787344Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4725 runtime=io.containerd.runc.v2\n"
Jul 2 11:18:29.019651 env[1560]: time="2024-07-02T11:18:29.019549813Z" level=info msg="TearDown network for sandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" successfully"
Jul 2 11:18:29.019651 env[1560]: time="2024-07-02T11:18:29.019610007Z" level=info msg="StopPodSandbox for \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" returns successfully"
Jul 2 11:18:29.037943 env[1560]: time="2024-07-02T11:18:29.037840170Z" level=info msg="shim disconnected" id=9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab
Jul 2 11:18:29.038327 env[1560]: time="2024-07-02T11:18:29.037946997Z" level=warning msg="cleaning up after shim disconnected" id=9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab namespace=k8s.io
Jul 2 11:18:29.038327 env[1560]: time="2024-07-02T11:18:29.037983626Z" level=info msg="cleaning up dead shim"
Jul 2 11:18:29.042531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab-rootfs.mount: Deactivated successfully.
Jul 2 11:18:29.055966 env[1560]: time="2024-07-02T11:18:29.055879459Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4748 runtime=io.containerd.runc.v2\n"
Jul 2 11:18:29.058122 env[1560]: time="2024-07-02T11:18:29.057997967Z" level=info msg="StopContainer for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" returns successfully"
Jul 2 11:18:29.059027 env[1560]: time="2024-07-02T11:18:29.058952949Z" level=info msg="StopPodSandbox for \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\""
Jul 2 11:18:29.059324 env[1560]: time="2024-07-02T11:18:29.059090348Z" level=info msg="Container to stop \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:18:29.059324 env[1560]: time="2024-07-02T11:18:29.059135450Z" level=info msg="Container to stop \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:18:29.059324 env[1560]: time="2024-07-02T11:18:29.059168178Z" level=info msg="Container to stop \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:18:29.059324 env[1560]: time="2024-07-02T11:18:29.059198871Z" level=info msg="Container to stop \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:18:29.059324 env[1560]: time="2024-07-02T11:18:29.059233549Z" level=info msg="Container to stop \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:18:29.073728 systemd[1]: cri-containerd-c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2.scope: Deactivated successfully.
Jul 2 11:18:29.106780 env[1560]: time="2024-07-02T11:18:29.106726264Z" level=info msg="shim disconnected" id=c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2
Jul 2 11:18:29.106780 env[1560]: time="2024-07-02T11:18:29.106780410Z" level=warning msg="cleaning up after shim disconnected" id=c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2 namespace=k8s.io
Jul 2 11:18:29.107016 env[1560]: time="2024-07-02T11:18:29.106792861Z" level=info msg="cleaning up dead shim"
Jul 2 11:18:29.114220 env[1560]: time="2024-07-02T11:18:29.114155487Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4780 runtime=io.containerd.runc.v2\n"
Jul 2 11:18:29.114487 env[1560]: time="2024-07-02T11:18:29.114455350Z" level=info msg="TearDown network for sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" successfully"
Jul 2 11:18:29.114553 env[1560]: time="2024-07-02T11:18:29.114490024Z" level=info msg="StopPodSandbox for \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" returns successfully"
Jul 2 11:18:29.164826 kubelet[2591]: I0702 11:18:29.164619 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9490ced8-96d8-469f-a696-8effe9f4ecfd-cilium-config-path\") pod \"9490ced8-96d8-469f-a696-8effe9f4ecfd\" (UID: \"9490ced8-96d8-469f-a696-8effe9f4ecfd\") "
Jul 2 11:18:29.164826 kubelet[2591]: I0702 11:18:29.164748 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28nd2\" (UniqueName: \"kubernetes.io/projected/9490ced8-96d8-469f-a696-8effe9f4ecfd-kube-api-access-28nd2\") pod \"9490ced8-96d8-469f-a696-8effe9f4ecfd\" (UID: \"9490ced8-96d8-469f-a696-8effe9f4ecfd\") "
Jul 2 11:18:29.170102 kubelet[2591]: I0702 11:18:29.169997 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9490ced8-96d8-469f-a696-8effe9f4ecfd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9490ced8-96d8-469f-a696-8effe9f4ecfd" (UID: "9490ced8-96d8-469f-a696-8effe9f4ecfd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 11:18:29.171550 kubelet[2591]: I0702 11:18:29.171407 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9490ced8-96d8-469f-a696-8effe9f4ecfd-kube-api-access-28nd2" (OuterVolumeSpecName: "kube-api-access-28nd2") pod "9490ced8-96d8-469f-a696-8effe9f4ecfd" (UID: "9490ced8-96d8-469f-a696-8effe9f4ecfd"). InnerVolumeSpecName "kube-api-access-28nd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 11:18:29.255276 kubelet[2591]: I0702 11:18:29.255202 2591 scope.go:117] "RemoveContainer" containerID="9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab"
Jul 2 11:18:29.261413 env[1560]: time="2024-07-02T11:18:29.261331973Z" level=info msg="RemoveContainer for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\""
Jul 2 11:18:29.265218 kubelet[2591]: I0702 11:18:29.265135 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8rpk\" (UniqueName: \"kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-kube-api-access-t8rpk\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.265508 kubelet[2591]: I0702 11:18:29.265263 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-cgroup\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.265508 kubelet[2591]: I0702 11:18:29.265349 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-etc-cni-netd\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.265508 kubelet[2591]: I0702 11:18:29.265389 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.265508 kubelet[2591]: I0702 11:18:29.265444 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a2a4868-17ec-4ab6-a62f-61e95c751c96-clustermesh-secrets\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.266011 kubelet[2591]: I0702 11:18:29.265553 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-xtables-lock\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.266011 kubelet[2591]: I0702 11:18:29.265640 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-bpf-maps\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.266011 kubelet[2591]: I0702 11:18:29.265733 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-config-path\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.266011 kubelet[2591]: I0702 11:18:29.265535 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.266011 kubelet[2591]: I0702 11:18:29.265739 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.266011 kubelet[2591]: I0702 11:18:29.265822 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hubble-tls\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.267049 kubelet[2591]: I0702 11:18:29.265722 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.267049 kubelet[2591]: I0702 11:18:29.265901 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hostproc\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.267049 kubelet[2591]: I0702 11:18:29.266026 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-run\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.267049 kubelet[2591]: I0702 11:18:29.266083 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-kernel\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.267049 kubelet[2591]: I0702 11:18:29.266129 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-lib-modules\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.267049 kubelet[2591]: I0702 11:18:29.266089 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hostproc" (OuterVolumeSpecName: "hostproc") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.267956 env[1560]: time="2024-07-02T11:18:29.266238813Z" level=info msg="RemoveContainer for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" returns successfully"
Jul 2 11:18:29.268142 kubelet[2591]: I0702 11:18:29.266175 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cni-path\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.268142 kubelet[2591]: I0702 11:18:29.266177 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.268142 kubelet[2591]: I0702 11:18:29.266215 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.268142 kubelet[2591]: I0702 11:18:29.266237 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-net\") pod \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\" (UID: \"2a2a4868-17ec-4ab6-a62f-61e95c751c96\") "
Jul 2 11:18:29.268142 kubelet[2591]: I0702 11:18:29.266307 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266302 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266313 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cni-path" (OuterVolumeSpecName: "cni-path") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266422 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9490ced8-96d8-469f-a696-8effe9f4ecfd-cilium-config-path\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266458 2591 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hostproc\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266528 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-run\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266564 2591 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.268833 kubelet[2591]: I0702 11:18:29.266592 2591 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-host-proc-sys-net\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.269718 kubelet[2591]: I0702 11:18:29.266620 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-cgroup\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.269718 kubelet[2591]: I0702 11:18:29.266651 2591 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-etc-cni-netd\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.269718 kubelet[2591]: I0702 11:18:29.266677 2591 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-28nd2\" (UniqueName: \"kubernetes.io/projected/9490ced8-96d8-469f-a696-8effe9f4ecfd-kube-api-access-28nd2\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.269718 kubelet[2591]: I0702 11:18:29.266721 2591 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-xtables-lock\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.269718 kubelet[2591]: I0702 11:18:29.266770 2591 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-bpf-maps\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.269718 kubelet[2591]: I0702 11:18:29.266826 2591 scope.go:117] "RemoveContainer" containerID="c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc"
Jul 2 11:18:29.270301 env[1560]: time="2024-07-02T11:18:29.269860448Z" level=info msg="RemoveContainer for \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\""
Jul 2 11:18:29.271571 kubelet[2591]: I0702 11:18:29.271504 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 11:18:29.271988 systemd[1]: Removed slice kubepods-besteffort-pod9490ced8_96d8_469f_a696_8effe9f4ecfd.slice.
Jul 2 11:18:29.272256 systemd[1]: kubepods-besteffort-pod9490ced8_96d8_469f_a696_8effe9f4ecfd.slice: Consumed 1.046s CPU time.
Jul 2 11:18:29.272920 kubelet[2591]: I0702 11:18:29.272810 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-kube-api-access-t8rpk" (OuterVolumeSpecName: "kube-api-access-t8rpk") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "kube-api-access-t8rpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 11:18:29.273476 kubelet[2591]: I0702 11:18:29.273418 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 11:18:29.274329 kubelet[2591]: I0702 11:18:29.274216 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2a4868-17ec-4ab6-a62f-61e95c751c96-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2a2a4868-17ec-4ab6-a62f-61e95c751c96" (UID: "2a2a4868-17ec-4ab6-a62f-61e95c751c96"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 11:18:29.274591 env[1560]: time="2024-07-02T11:18:29.274366982Z" level=info msg="RemoveContainer for \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\" returns successfully"
Jul 2 11:18:29.274795 kubelet[2591]: I0702 11:18:29.274744 2591 scope.go:117] "RemoveContainer" containerID="5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7"
Jul 2 11:18:29.277238 env[1560]: time="2024-07-02T11:18:29.277142385Z" level=info msg="RemoveContainer for \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\""
Jul 2 11:18:29.281460 env[1560]: time="2024-07-02T11:18:29.281357483Z" level=info msg="RemoveContainer for \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\" returns successfully"
Jul 2 11:18:29.281838 kubelet[2591]: I0702 11:18:29.281747 2591 scope.go:117] "RemoveContainer" containerID="aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06"
Jul 2 11:18:29.284399 env[1560]: time="2024-07-02T11:18:29.284329246Z" level=info msg="RemoveContainer for \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\""
Jul 2 11:18:29.289416 env[1560]: time="2024-07-02T11:18:29.289291503Z" level=info msg="RemoveContainer for \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\" returns successfully"
Jul 2 11:18:29.289804 kubelet[2591]: I0702 11:18:29.289713 2591 scope.go:117] "RemoveContainer" containerID="ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c"
Jul 2 11:18:29.292316 env[1560]: time="2024-07-02T11:18:29.292208386Z" level=info msg="RemoveContainer for \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\""
Jul 2 11:18:29.296504 env[1560]: time="2024-07-02T11:18:29.296381702Z" level=info msg="RemoveContainer for \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\" returns successfully"
Jul 2 11:18:29.296849 kubelet[2591]: I0702 11:18:29.296755 2591 scope.go:117] "RemoveContainer" containerID="9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab"
Jul 2 11:18:29.297422 env[1560]: time="2024-07-02T11:18:29.297196258Z" level=error msg="ContainerStatus for \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\": not found"
Jul 2 11:18:29.297800 kubelet[2591]: E0702 11:18:29.297695 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\": not found" containerID="9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab"
Jul 2 11:18:29.298010 kubelet[2591]: I0702 11:18:29.297781 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab"} err="failed to get container status \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b7e54f77489c61135904633b20571b21a33030d2eea350c30f115b8b24740ab\": not found"
Jul 2 11:18:29.298010 kubelet[2591]: I0702 11:18:29.297968 2591 scope.go:117] "RemoveContainer" containerID="c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc"
Jul 2 11:18:29.298665 env[1560]: time="2024-07-02T11:18:29.298517911Z" level=error msg="ContainerStatus for \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\": not found"
Jul 2 11:18:29.298989 kubelet[2591]: E0702 11:18:29.298928 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\": not found" containerID="c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc"
Jul 2 11:18:29.299178 kubelet[2591]: I0702 11:18:29.299001 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc"} err="failed to get container status \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c78b435ced197582c21c6611ed00860ba6c9bf32e68f49a268be910855ef67cc\": not found"
Jul 2 11:18:29.299178 kubelet[2591]: I0702 11:18:29.299049 2591 scope.go:117] "RemoveContainer" containerID="5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7"
Jul 2 11:18:29.299736 env[1560]: time="2024-07-02T11:18:29.299549014Z" level=error msg="ContainerStatus for \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\": not found"
Jul 2 11:18:29.300072 kubelet[2591]: E0702 11:18:29.299969 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\": not found" containerID="5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7"
Jul 2 11:18:29.300072 kubelet[2591]: I0702 11:18:29.300040 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7"} err="failed to get container status \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ec10c8fa3bb1dddd96ab8c11b299ca777451d2705bc205bd467c45177e790e7\": not found"
Jul 2 11:18:29.300412 kubelet[2591]: I0702 11:18:29.300107 2591 scope.go:117] "RemoveContainer" containerID="aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06"
Jul 2 11:18:29.300786 env[1560]: time="2024-07-02T11:18:29.300609900Z" level=error msg="ContainerStatus for \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\": not found"
Jul 2 11:18:29.301086 kubelet[2591]: E0702 11:18:29.301007 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\": not found" containerID="aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06"
Jul 2 11:18:29.301234 kubelet[2591]: I0702 11:18:29.301079 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06"} err="failed to get container status \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\": rpc error: code = NotFound desc = an error occurred when try to find container \"aaafddbc256b8f79b2212bfa3295199a254f9183de29e2b823604579d158ee06\": not found"
Jul 2 11:18:29.301234 kubelet[2591]: I0702 11:18:29.301127 2591 scope.go:117] "RemoveContainer" containerID="ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c"
Jul 2 11:18:29.301823 env[1560]: time="2024-07-02T11:18:29.301630884Z" level=error msg="ContainerStatus for \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\": not found"
Jul 2 11:18:29.302041 kubelet[2591]: E0702 11:18:29.301987 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\": not found" containerID="ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c"
Jul 2 11:18:29.302183 kubelet[2591]: I0702 11:18:29.302042 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c"} err="failed to get container status \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce2cdde12bb30bfa80cab583b0e523cc48ca12c992c660f538dd5543ba8d782c\": not found"
Jul 2 11:18:29.302183 kubelet[2591]: I0702 11:18:29.302098 2591 scope.go:117] "RemoveContainer" containerID="d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5"
Jul 2 11:18:29.304694 env[1560]: time="2024-07-02T11:18:29.304584437Z" level=info msg="RemoveContainer for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\""
Jul 2 11:18:29.308571 env[1560]: time="2024-07-02T11:18:29.308465515Z" level=info msg="RemoveContainer for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" returns successfully"
Jul 2 11:18:29.308906 kubelet[2591]: I0702 11:18:29.308825 2591 scope.go:117] "RemoveContainer" containerID="d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5"
Jul 2 11:18:29.309465 env[1560]: time="2024-07-02T11:18:29.309312452Z" level=error msg="ContainerStatus for \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\": not found"
Jul 2 11:18:29.309771 kubelet[2591]: E0702 11:18:29.309677 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\": not found" containerID="d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5"
Jul 2 11:18:29.309771 kubelet[2591]: I0702 11:18:29.309736 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5"} err="failed to get container status \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d52e1c7462ef1ef8777be8036935afcb17b15dad36d09a52737b9f2faa0783c5\": not found"
Jul 2 11:18:29.367678 kubelet[2591]: I0702 11:18:29.367562 2591 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t8rpk\" (UniqueName: \"kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-kube-api-access-t8rpk\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.367678 kubelet[2591]: I0702 11:18:29.367635 2591 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a2a4868-17ec-4ab6-a62f-61e95c751c96-clustermesh-secrets\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.367678 kubelet[2591]: I0702 11:18:29.367669 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cilium-config-path\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.367678 kubelet[2591]: I0702 11:18:29.367697 2591 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a2a4868-17ec-4ab6-a62f-61e95c751c96-hubble-tls\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.368281 kubelet[2591]: I0702 11:18:29.367724 2591 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-lib-modules\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.368281 kubelet[2591]: I0702 11:18:29.367751 2591 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a2a4868-17ec-4ab6-a62f-61e95c751c96-cni-path\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\""
Jul 2 11:18:29.561899 systemd[1]: Removed slice kubepods-burstable-pod2a2a4868_17ec_4ab6_a62f_61e95c751c96.slice.
Jul 2 11:18:29.561965 systemd[1]: kubepods-burstable-pod2a2a4868_17ec_4ab6_a62f_61e95c751c96.slice: Consumed 7.036s CPU time.
Jul 2 11:18:29.827423 kubelet[2591]: I0702 11:18:29.827402 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" path="/var/lib/kubelet/pods/2a2a4868-17ec-4ab6-a62f-61e95c751c96/volumes"
Jul 2 11:18:29.827833 kubelet[2591]: I0702 11:18:29.827821 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9490ced8-96d8-469f-a696-8effe9f4ecfd" path="/var/lib/kubelet/pods/9490ced8-96d8-469f-a696-8effe9f4ecfd/volumes"
Jul 2 11:18:29.905594 systemd[1]: var-lib-kubelet-pods-9490ced8\x2d96d8\x2d469f\x2da696\x2d8effe9f4ecfd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d28nd2.mount: Deactivated successfully.
Jul 2 11:18:29.905667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2-rootfs.mount: Deactivated successfully.
Jul 2 11:18:29.905721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2-shm.mount: Deactivated successfully.
Jul 2 11:18:29.905771 systemd[1]: var-lib-kubelet-pods-2a2a4868\x2d17ec\x2d4ab6\x2da62f\x2d61e95c751c96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt8rpk.mount: Deactivated successfully.
Jul 2 11:18:29.905822 systemd[1]: var-lib-kubelet-pods-2a2a4868\x2d17ec\x2d4ab6\x2da62f\x2d61e95c751c96-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 11:18:29.905875 systemd[1]: var-lib-kubelet-pods-2a2a4868\x2d17ec\x2d4ab6\x2da62f\x2d61e95c751c96-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 11:18:30.863549 sshd[4623]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:30.868167 systemd[1]: sshd@24-145.40.90.137:22-139.178.68.195:34272.service: Deactivated successfully.
Jul 2 11:18:30.869254 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 11:18:30.870290 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit.
Jul 2 11:18:30.870982 systemd[1]: Started sshd@25-145.40.90.137:22-139.178.68.195:34278.service.
Jul 2 11:18:30.871450 systemd-logind[1551]: Removed session 25.
Jul 2 11:18:30.899239 sshd[4798]: Accepted publickey for core from 139.178.68.195 port 34278 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:30.900055 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:30.902879 systemd-logind[1551]: New session 26 of user core.
Jul 2 11:18:30.903428 systemd[1]: Started session-26.scope.
Jul 2 11:18:30.981463 kubelet[2591]: E0702 11:18:30.981438 2591 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 11:18:31.333095 sshd[4798]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:31.335684 systemd[1]: sshd@25-145.40.90.137:22-139.178.68.195:34278.service: Deactivated successfully.
Jul 2 11:18:31.336232 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 11:18:31.336649 systemd-logind[1551]: Session 26 logged out. Waiting for processes to exit.
Jul 2 11:18:31.337575 systemd[1]: Started sshd@26-145.40.90.137:22-139.178.68.195:34290.service.
Jul 2 11:18:31.338318 systemd-logind[1551]: Removed session 26.
Jul 2 11:18:31.338467 kubelet[2591]: I0702 11:18:31.338447 2591 topology_manager.go:215] "Topology Admit Handler" podUID="302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" podNamespace="kube-system" podName="cilium-wtbd9"
Jul 2 11:18:31.338561 kubelet[2591]: E0702 11:18:31.338502 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9490ced8-96d8-469f-a696-8effe9f4ecfd" containerName="cilium-operator"
Jul 2 11:18:31.338561 kubelet[2591]: E0702 11:18:31.338512 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" containerName="clean-cilium-state"
Jul 2 11:18:31.338561 kubelet[2591]: E0702 11:18:31.338518 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" containerName="cilium-agent"
Jul 2 11:18:31.338561 kubelet[2591]: E0702 11:18:31.338525 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" containerName="mount-cgroup"
Jul 2 11:18:31.338561 kubelet[2591]: E0702 11:18:31.338532 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" containerName="apply-sysctl-overwrites"
Jul 2 11:18:31.338561 kubelet[2591]: E0702 11:18:31.338537 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" containerName="mount-bpf-fs"
Jul 2 11:18:31.338561 kubelet[2591]: I0702 11:18:31.338556 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="9490ced8-96d8-469f-a696-8effe9f4ecfd" containerName="cilium-operator"
Jul 2 11:18:31.338561 kubelet[2591]: I0702 11:18:31.338561 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2a4868-17ec-4ab6-a62f-61e95c751c96" containerName="cilium-agent"
Jul 2 11:18:31.343690 systemd[1]: Created slice kubepods-burstable-pod302610bf_19d0_4eb5_aa8b_6230f7f6b6e2.slice.
Jul 2 11:18:31.377909 sshd[4821]: Accepted publickey for core from 139.178.68.195 port 34290 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:31.378798 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:31.381252 systemd-logind[1551]: New session 27 of user core.
Jul 2 11:18:31.381705 systemd[1]: Started session-27.scope.
Jul 2 11:18:31.479379 kubelet[2591]: I0702 11:18:31.479346 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-bpf-maps\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479519 kubelet[2591]: I0702 11:18:31.479385 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-cgroup\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479519 kubelet[2591]: I0702 11:18:31.479424 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-etc-cni-netd\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479519 kubelet[2591]: I0702 11:18:31.479449 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-clustermesh-secrets\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479519 kubelet[2591]: I0702 11:18:31.479475 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hubble-tls\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479519 kubelet[2591]: I0702 11:18:31.479506 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5bs\" (UniqueName: \"kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-kube-api-access-7p5bs\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479704 kubelet[2591]: I0702 11:18:31.479523 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-config-path\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479704 kubelet[2591]: I0702 11:18:31.479541 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-ipsec-secrets\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479704 kubelet[2591]: I0702 11:18:31.479568 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-run\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479704 kubelet[2591]: I0702 11:18:31.479585 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-xtables-lock\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479704 kubelet[2591]: I0702 11:18:31.479603 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-lib-modules\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479704 kubelet[2591]: I0702 11:18:31.479621 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hostproc\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479897 kubelet[2591]: I0702 11:18:31.479648 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cni-path\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479897 kubelet[2591]: I0702 11:18:31.479665 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-net\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.479897 kubelet[2591]: I0702 11:18:31.479728 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-kernel\") pod \"cilium-wtbd9\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " pod="kube-system/cilium-wtbd9"
Jul 2 11:18:31.491409 sshd[4821]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:31.494124 systemd[1]: sshd@26-145.40.90.137:22-139.178.68.195:34290.service: Deactivated successfully.
Jul 2 11:18:31.494756 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 11:18:31.495311 systemd-logind[1551]: Session 27 logged out. Waiting for processes to exit.
Jul 2 11:18:31.496365 systemd[1]: Started sshd@27-145.40.90.137:22-139.178.68.195:34296.service.
Jul 2 11:18:31.497169 systemd-logind[1551]: Removed session 27.
Jul 2 11:18:31.501400 kubelet[2591]: E0702 11:18:31.501353 2591 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-7p5bs lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-wtbd9" podUID="302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"
Jul 2 11:18:31.552360 sshd[4846]: Accepted publickey for core from 139.178.68.195 port 34296 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:18:31.554222 sshd[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:18:31.559116 systemd-logind[1551]: New session 28 of user core.
Jul 2 11:18:31.560440 systemd[1]: Started session-28.scope.
Jul 2 11:18:32.386736 kubelet[2591]: I0702 11:18:32.386576 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-cgroup\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.386736 kubelet[2591]: I0702 11:18:32.386678 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-ipsec-secrets\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.386736 kubelet[2591]: I0702 11:18:32.386725 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:32.387844 kubelet[2591]: I0702 11:18:32.386740 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-etc-cni-netd\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.387844 kubelet[2591]: I0702 11:18:32.386802 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:32.387844 kubelet[2591]: I0702 11:18:32.386883 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-lib-modules\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.387844 kubelet[2591]: I0702 11:18:32.386943 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-net\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.387844 kubelet[2591]: I0702 11:18:32.387003 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5bs\" (UniqueName: \"kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-kube-api-access-7p5bs\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.387844 kubelet[2591]: I0702 11:18:32.387067 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-clustermesh-secrets\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.388448 kubelet[2591]: I0702 11:18:32.386990 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:32.388448 kubelet[2591]: I0702 11:18:32.387053 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:32.388448 kubelet[2591]: I0702 11:18:32.387155 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hubble-tls\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.388448 kubelet[2591]: I0702 11:18:32.387249 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-config-path\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.388448 kubelet[2591]: I0702 11:18:32.387329 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cni-path\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.388448 kubelet[2591]: I0702 11:18:32.387407 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-bpf-maps\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.389083 kubelet[2591]: I0702 11:18:32.387516 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-run\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.389083 kubelet[2591]: I0702 11:18:32.387602 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hostproc\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") "
Jul 2 11:18:32.389083 kubelet[2591]: I0702 11:18:32.387511 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:32.389083 kubelet[2591]: I0702 11:18:32.387576 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:18:32.389083 kubelet[2591]: I0702 11:18:32.387635 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:18:32.389611 kubelet[2591]: I0702 11:18:32.387729 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-xtables-lock\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " Jul 2 11:18:32.389611 kubelet[2591]: I0702 11:18:32.387753 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:18:32.389611 kubelet[2591]: I0702 11:18:32.387787 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:18:32.389611 kubelet[2591]: I0702 11:18:32.387819 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-kernel\") pod \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\" (UID: \"302610bf-19d0-4eb5-aa8b-6230f7f6b6e2\") " Jul 2 11:18:32.389611 kubelet[2591]: I0702 11:18:32.387956 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-run\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.387945 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.388011 2591 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hostproc\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.388042 2591 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-xtables-lock\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.388071 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-cgroup\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.388095 2591 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-etc-cni-netd\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.388118 2591 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-lib-modules\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390102 kubelet[2591]: I0702 11:18:32.388142 2591 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-net\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.390900 kubelet[2591]: I0702 11:18:32.388170 2591 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cni-path\") on node \"ci-3510.3.5-a-3a013adf74\" 
DevicePath \"\"" Jul 2 11:18:32.390900 kubelet[2591]: I0702 11:18:32.388206 2591 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-bpf-maps\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.392239 kubelet[2591]: I0702 11:18:32.392204 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 11:18:32.392732 kubelet[2591]: I0702 11:18:32.392703 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:18:32.392824 kubelet[2591]: I0702 11:18:32.392732 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:18:32.392824 kubelet[2591]: I0702 11:18:32.392783 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-kube-api-access-7p5bs" (OuterVolumeSpecName: "kube-api-access-7p5bs") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). 
InnerVolumeSpecName "kube-api-access-7p5bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:18:32.392901 kubelet[2591]: I0702 11:18:32.392824 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" (UID: "302610bf-19d0-4eb5-aa8b-6230f7f6b6e2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:18:32.393686 systemd[1]: var-lib-kubelet-pods-302610bf\x2d19d0\x2d4eb5\x2daa8b\x2d6230f7f6b6e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7p5bs.mount: Deactivated successfully. Jul 2 11:18:32.393739 systemd[1]: var-lib-kubelet-pods-302610bf\x2d19d0\x2d4eb5\x2daa8b\x2d6230f7f6b6e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 11:18:32.393774 systemd[1]: var-lib-kubelet-pods-302610bf\x2d19d0\x2d4eb5\x2daa8b\x2d6230f7f6b6e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 11:18:32.393808 systemd[1]: var-lib-kubelet-pods-302610bf\x2d19d0\x2d4eb5\x2daa8b\x2d6230f7f6b6e2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 2 11:18:32.488990 kubelet[2591]: I0702 11:18:32.488871 2591 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7p5bs\" (UniqueName: \"kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-kube-api-access-7p5bs\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.488990 kubelet[2591]: I0702 11:18:32.488943 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-config-path\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.488990 kubelet[2591]: I0702 11:18:32.488973 2591 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-clustermesh-secrets\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.488990 kubelet[2591]: I0702 11:18:32.489002 2591 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-hubble-tls\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.489667 kubelet[2591]: I0702 11:18:32.489027 2591 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:32.489667 kubelet[2591]: I0702 11:18:32.489054 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-3a013adf74\" DevicePath \"\"" Jul 2 11:18:33.277690 systemd[1]: Removed slice kubepods-burstable-pod302610bf_19d0_4eb5_aa8b_6230f7f6b6e2.slice. 
Jul 2 11:18:33.304439 kubelet[2591]: I0702 11:18:33.304390 2591 topology_manager.go:215] "Topology Admit Handler" podUID="ba7ba213-61d4-4cc9-bbdb-728719264dda" podNamespace="kube-system" podName="cilium-t2kpx" Jul 2 11:18:33.307931 systemd[1]: Created slice kubepods-burstable-podba7ba213_61d4_4cc9_bbdb_728719264dda.slice. Jul 2 11:18:33.494772 kubelet[2591]: I0702 11:18:33.494748 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-cni-path\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.494772 kubelet[2591]: I0702 11:18:33.494775 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62qkh\" (UniqueName: \"kubernetes.io/projected/ba7ba213-61d4-4cc9-bbdb-728719264dda-kube-api-access-62qkh\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495071 kubelet[2591]: I0702 11:18:33.494792 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-etc-cni-netd\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495071 kubelet[2591]: I0702 11:18:33.494806 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-host-proc-sys-kernel\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495071 kubelet[2591]: I0702 11:18:33.494818 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba7ba213-61d4-4cc9-bbdb-728719264dda-hubble-tls\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495071 kubelet[2591]: I0702 11:18:33.494831 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-cilium-run\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495071 kubelet[2591]: I0702 11:18:33.494842 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-bpf-maps\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495071 kubelet[2591]: I0702 11:18:33.494854 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-hostproc\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495208 kubelet[2591]: I0702 11:18:33.494864 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-xtables-lock\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495208 kubelet[2591]: I0702 11:18:33.494877 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ba7ba213-61d4-4cc9-bbdb-728719264dda-cilium-ipsec-secrets\") pod \"cilium-t2kpx\" (UID: 
\"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495208 kubelet[2591]: I0702 11:18:33.494890 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-lib-modules\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495208 kubelet[2591]: I0702 11:18:33.494902 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-host-proc-sys-net\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495208 kubelet[2591]: I0702 11:18:33.494922 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba7ba213-61d4-4cc9-bbdb-728719264dda-clustermesh-secrets\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495317 kubelet[2591]: I0702 11:18:33.494934 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba7ba213-61d4-4cc9-bbdb-728719264dda-cilium-config-path\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.495317 kubelet[2591]: I0702 11:18:33.494946 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba7ba213-61d4-4cc9-bbdb-728719264dda-cilium-cgroup\") pod \"cilium-t2kpx\" (UID: \"ba7ba213-61d4-4cc9-bbdb-728719264dda\") " pod="kube-system/cilium-t2kpx" Jul 2 11:18:33.610148 env[1560]: 
time="2024-07-02T11:18:33.610123998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2kpx,Uid:ba7ba213-61d4-4cc9-bbdb-728719264dda,Namespace:kube-system,Attempt:0,}" Jul 2 11:18:33.615323 env[1560]: time="2024-07-02T11:18:33.615264802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:18:33.615323 env[1560]: time="2024-07-02T11:18:33.615287245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:18:33.615323 env[1560]: time="2024-07-02T11:18:33.615299849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:18:33.615445 env[1560]: time="2024-07-02T11:18:33.615402232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4 pid=4885 runtime=io.containerd.runc.v2 Jul 2 11:18:33.621002 systemd[1]: Started cri-containerd-060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4.scope. 
Jul 2 11:18:33.632038 env[1560]: time="2024-07-02T11:18:33.631986851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2kpx,Uid:ba7ba213-61d4-4cc9-bbdb-728719264dda,Namespace:kube-system,Attempt:0,} returns sandbox id \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\"" Jul 2 11:18:33.633197 env[1560]: time="2024-07-02T11:18:33.633183281Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 11:18:33.637976 env[1560]: time="2024-07-02T11:18:33.637923890Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc\"" Jul 2 11:18:33.638184 env[1560]: time="2024-07-02T11:18:33.638150792Z" level=info msg="StartContainer for \"6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc\"" Jul 2 11:18:33.645491 systemd[1]: Started cri-containerd-6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc.scope. Jul 2 11:18:33.657813 env[1560]: time="2024-07-02T11:18:33.657789493Z" level=info msg="StartContainer for \"6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc\" returns successfully" Jul 2 11:18:33.662510 systemd[1]: cri-containerd-6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc.scope: Deactivated successfully. 
Jul 2 11:18:33.691089 env[1560]: time="2024-07-02T11:18:33.691050670Z" level=info msg="shim disconnected" id=6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc Jul 2 11:18:33.691229 env[1560]: time="2024-07-02T11:18:33.691089865Z" level=warning msg="cleaning up after shim disconnected" id=6f6dac308891494ea9b4467f1234e7bb04982a04dfb35d8bc44ef326e0f933cc namespace=k8s.io Jul 2 11:18:33.691229 env[1560]: time="2024-07-02T11:18:33.691099176Z" level=info msg="cleaning up dead shim" Jul 2 11:18:33.697132 env[1560]: time="2024-07-02T11:18:33.697063428Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4968 runtime=io.containerd.runc.v2\n" Jul 2 11:18:33.826085 kubelet[2591]: E0702 11:18:33.825959 2591 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d7m5l" podUID="8f66db10-f16f-4afb-817c-850ce6633381" Jul 2 11:18:33.832495 kubelet[2591]: I0702 11:18:33.832362 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302610bf-19d0-4eb5-aa8b-6230f7f6b6e2" path="/var/lib/kubelet/pods/302610bf-19d0-4eb5-aa8b-6230f7f6b6e2/volumes" Jul 2 11:18:34.281274 env[1560]: time="2024-07-02T11:18:34.281132074Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 11:18:34.311814 env[1560]: time="2024-07-02T11:18:34.311702770Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385\"" Jul 2 11:18:34.312772 env[1560]: 
time="2024-07-02T11:18:34.312696514Z" level=info msg="StartContainer for \"6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385\"" Jul 2 11:18:34.339869 systemd[1]: Started cri-containerd-6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385.scope. Jul 2 11:18:34.378910 env[1560]: time="2024-07-02T11:18:34.378803901Z" level=info msg="StartContainer for \"6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385\" returns successfully" Jul 2 11:18:34.392197 systemd[1]: cri-containerd-6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385.scope: Deactivated successfully. Jul 2 11:18:34.421433 env[1560]: time="2024-07-02T11:18:34.421307082Z" level=info msg="shim disconnected" id=6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385 Jul 2 11:18:34.421433 env[1560]: time="2024-07-02T11:18:34.421397116Z" level=warning msg="cleaning up after shim disconnected" id=6e55ed52de0b40843076c36048f516efa3666a30e43bd83b889c09fd2eb56385 namespace=k8s.io Jul 2 11:18:34.421433 env[1560]: time="2024-07-02T11:18:34.421420652Z" level=info msg="cleaning up dead shim" Jul 2 11:18:34.433173 env[1560]: time="2024-07-02T11:18:34.433090013Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5032 runtime=io.containerd.runc.v2\n" Jul 2 11:18:35.279976 env[1560]: time="2024-07-02T11:18:35.279932535Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 11:18:35.288697 env[1560]: time="2024-07-02T11:18:35.288672756Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7\"" Jul 2 11:18:35.289175 env[1560]: 
time="2024-07-02T11:18:35.289068420Z" level=info msg="StartContainer for \"4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7\"" Jul 2 11:18:35.299304 systemd[1]: Started cri-containerd-4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7.scope. Jul 2 11:18:35.313301 env[1560]: time="2024-07-02T11:18:35.313247518Z" level=info msg="StartContainer for \"4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7\" returns successfully" Jul 2 11:18:35.315263 systemd[1]: cri-containerd-4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7.scope: Deactivated successfully. Jul 2 11:18:35.326322 env[1560]: time="2024-07-02T11:18:35.326296540Z" level=info msg="shim disconnected" id=4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7 Jul 2 11:18:35.326322 env[1560]: time="2024-07-02T11:18:35.326321497Z" level=warning msg="cleaning up after shim disconnected" id=4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7 namespace=k8s.io Jul 2 11:18:35.326433 env[1560]: time="2024-07-02T11:18:35.326326895Z" level=info msg="cleaning up dead shim" Jul 2 11:18:35.329975 env[1560]: time="2024-07-02T11:18:35.329929000Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5087 runtime=io.containerd.runc.v2\n" Jul 2 11:18:35.465361 kubelet[2591]: I0702 11:18:35.465265 2591 setters.go:580] "Node became not ready" node="ci-3510.3.5-a-3a013adf74" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T11:18:35Z","lastTransitionTime":"2024-07-02T11:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 11:18:35.602185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d3f77b0535c3f030f491ef4e9d3d706b58ad335a3f988c152e7afc3ebf634a7-rootfs.mount: Deactivated successfully. 
Jul 2 11:18:35.826959 kubelet[2591]: E0702 11:18:35.826872 2591 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d7m5l" podUID="8f66db10-f16f-4afb-817c-850ce6633381" Jul 2 11:18:35.832215 env[1560]: time="2024-07-02T11:18:35.832078362Z" level=info msg="StopPodSandbox for \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\"" Jul 2 11:18:35.832600 env[1560]: time="2024-07-02T11:18:35.832377705Z" level=info msg="TearDown network for sandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" successfully" Jul 2 11:18:35.832600 env[1560]: time="2024-07-02T11:18:35.832566984Z" level=info msg="StopPodSandbox for \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" returns successfully" Jul 2 11:18:35.833637 env[1560]: time="2024-07-02T11:18:35.833523606Z" level=info msg="RemovePodSandbox for \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\"" Jul 2 11:18:35.833853 env[1560]: time="2024-07-02T11:18:35.833609451Z" level=info msg="Forcibly stopping sandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\"" Jul 2 11:18:35.834047 env[1560]: time="2024-07-02T11:18:35.833791465Z" level=info msg="TearDown network for sandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" successfully" Jul 2 11:18:35.838576 env[1560]: time="2024-07-02T11:18:35.838494356Z" level=info msg="RemovePodSandbox \"5d98d326923a7333b8c09432f5afc4269e2a133d0ca6a7fcd8a9782a94af22fe\" returns successfully" Jul 2 11:18:35.839515 env[1560]: time="2024-07-02T11:18:35.839430064Z" level=info msg="StopPodSandbox for \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\"" Jul 2 11:18:35.839775 env[1560]: time="2024-07-02T11:18:35.839676598Z" level=info msg="TearDown network for 
sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" successfully"
Jul 2 11:18:35.839997 env[1560]: time="2024-07-02T11:18:35.839770699Z" level=info msg="StopPodSandbox for \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" returns successfully"
Jul 2 11:18:35.840558 env[1560]: time="2024-07-02T11:18:35.840454047Z" level=info msg="RemovePodSandbox for \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\""
Jul 2 11:18:35.840788 env[1560]: time="2024-07-02T11:18:35.840562441Z" level=info msg="Forcibly stopping sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\""
Jul 2 11:18:35.840982 env[1560]: time="2024-07-02T11:18:35.840786521Z" level=info msg="TearDown network for sandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" successfully"
Jul 2 11:18:35.848877 env[1560]: time="2024-07-02T11:18:35.848755970Z" level=info msg="RemovePodSandbox \"c7f492102453fc675d04ba1efa64e9c9817279a7f818a8399d105e230ca3add2\" returns successfully"
Jul 2 11:18:35.983546 kubelet[2591]: E0702 11:18:35.983291 2591 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 11:18:36.297620 env[1560]: time="2024-07-02T11:18:36.297375404Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 11:18:36.314692 env[1560]: time="2024-07-02T11:18:36.314567062Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39\""
Jul 2 11:18:36.315597 env[1560]: time="2024-07-02T11:18:36.315464042Z" level=info msg="StartContainer for \"3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39\""
Jul 2 11:18:36.325281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382872415.mount: Deactivated successfully.
Jul 2 11:18:36.337500 systemd[1]: Started cri-containerd-3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39.scope.
Jul 2 11:18:36.349234 env[1560]: time="2024-07-02T11:18:36.349208687Z" level=info msg="StartContainer for \"3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39\" returns successfully"
Jul 2 11:18:36.349565 systemd[1]: cri-containerd-3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39.scope: Deactivated successfully.
Jul 2 11:18:36.382077 env[1560]: time="2024-07-02T11:18:36.382044716Z" level=info msg="shim disconnected" id=3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39
Jul 2 11:18:36.382077 env[1560]: time="2024-07-02T11:18:36.382074693Z" level=warning msg="cleaning up after shim disconnected" id=3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39 namespace=k8s.io
Jul 2 11:18:36.382208 env[1560]: time="2024-07-02T11:18:36.382081171Z" level=info msg="cleaning up dead shim"
Jul 2 11:18:36.386072 env[1560]: time="2024-07-02T11:18:36.386030715Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:18:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5141 runtime=io.containerd.runc.v2\n"
Jul 2 11:18:36.602129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3333faa6bebe52c9e7097b0bb7ee1160b391bcf6058498b37527a1ef06944c39-rootfs.mount: Deactivated successfully.
Jul 2 11:18:37.302221 env[1560]: time="2024-07-02T11:18:37.302069288Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 11:18:37.320823 env[1560]: time="2024-07-02T11:18:37.320725946Z" level=info msg="CreateContainer within sandbox \"060a6a662abd519cfb5200e6d94c098baaa4608dba4e587c8888ec4b0bfdf7f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fd2c08fead567847c21ed7c9e40cc14636ebcf5c8ed78023c5e315839d7b19e7\""
Jul 2 11:18:37.321736 env[1560]: time="2024-07-02T11:18:37.321663417Z" level=info msg="StartContainer for \"fd2c08fead567847c21ed7c9e40cc14636ebcf5c8ed78023c5e315839d7b19e7\""
Jul 2 11:18:37.323853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734498328.mount: Deactivated successfully.
Jul 2 11:18:37.331770 systemd[1]: Started cri-containerd-fd2c08fead567847c21ed7c9e40cc14636ebcf5c8ed78023c5e315839d7b19e7.scope.
Jul 2 11:18:37.344756 env[1560]: time="2024-07-02T11:18:37.344700009Z" level=info msg="StartContainer for \"fd2c08fead567847c21ed7c9e40cc14636ebcf5c8ed78023c5e315839d7b19e7\" returns successfully"
Jul 2 11:18:37.493487 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 11:18:37.826475 kubelet[2591]: E0702 11:18:37.826332 2591 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d7m5l" podUID="8f66db10-f16f-4afb-817c-850ce6633381"
Jul 2 11:18:38.315743 kubelet[2591]: I0702 11:18:38.315630 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t2kpx" podStartSLOduration=5.3156182229999995 podStartE2EDuration="5.315618223s" podCreationTimestamp="2024-07-02 11:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:18:38.315550671 +0000 UTC m=+482.605120148" watchObservedRunningTime="2024-07-02 11:18:38.315618223 +0000 UTC m=+482.605187697"
Jul 2 11:18:39.826119 kubelet[2591]: E0702 11:18:39.826021 2591 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-d7m5l" podUID="8f66db10-f16f-4afb-817c-850ce6633381"
Jul 2 11:18:40.580823 systemd-networkd[1325]: lxc_health: Link UP
Jul 2 11:18:40.603514 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 11:18:40.603717 systemd-networkd[1325]: lxc_health: Gained carrier
Jul 2 11:18:41.918595 systemd-networkd[1325]: lxc_health: Gained IPv6LL
Jul 2 11:18:46.139116 sshd[4846]: pam_unix(sshd:session): session closed for user core
Jul 2 11:18:46.140714 systemd[1]: sshd@27-145.40.90.137:22-139.178.68.195:34296.service: Deactivated successfully.
Jul 2 11:18:46.141215 systemd[1]: session-28.scope: Deactivated successfully.
Jul 2 11:18:46.141572 systemd-logind[1551]: Session 28 logged out. Waiting for processes to exit.
Jul 2 11:18:46.142110 systemd-logind[1551]: Removed session 28.