Dec 13 02:01:08.092052 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:01:08.092095 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:01:08.092113 kernel: BIOS-provided physical RAM map:
Dec 13 02:01:08.092126 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 02:01:08.092139 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 02:01:08.092152 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 02:01:08.092170 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 02:01:08.092183 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 02:01:08.092196 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable
Dec 13 02:01:08.092354 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data
Dec 13 02:01:08.092368 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable
Dec 13 02:01:08.092382 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 13 02:01:08.092395 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 02:01:08.092409 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 02:01:08.092555 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 02:01:08.092571 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 02:01:08.092585 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 02:01:08.092776 kernel: NX (Execute Disable) protection: active
Dec 13 02:01:08.092793 kernel: efi: EFI v2.70 by EDK II
Dec 13 02:01:08.092809 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018
Dec 13 02:01:08.092824 kernel: random: crng init done
Dec 13 02:01:08.092839 kernel: SMBIOS 2.4 present.
Dec 13 02:01:08.092936 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 02:01:08.092951 kernel: Hypervisor detected: KVM
Dec 13 02:01:08.092964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:01:08.092979 kernel: kvm-clock: cpu 0, msr 1c019b001, primary cpu clock
Dec 13 02:01:08.092993 kernel: kvm-clock: using sched offset of 12902082667 cycles
Dec 13 02:01:08.093007 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:01:08.093023 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 02:01:08.093038 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:01:08.093053 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:01:08.093068 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 02:01:08.093087 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:01:08.093102 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 02:01:08.093117 kernel: Using GB pages for direct mapping
Dec 13 02:01:08.093133 kernel: Secure boot disabled
Dec 13 02:01:08.093147 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:01:08.093163 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 02:01:08.093178 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 02:01:08.093193 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 02:01:08.093226 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 02:01:08.093241 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 02:01:08.093257 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 02:01:08.093273 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 02:01:08.093289 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 02:01:08.093305 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 02:01:08.093324 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 02:01:08.093340 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 02:01:08.093356 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 02:01:08.093372 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 02:01:08.093388 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 02:01:08.093403 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 02:01:08.093419 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 02:01:08.093435 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 02:01:08.093451 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 02:01:08.093469 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 02:01:08.093485 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 02:01:08.093501 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:01:08.093517 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:01:08.093533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 02:01:08.093549 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 02:01:08.093565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 02:01:08.093581 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 02:01:08.097776 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 02:01:08.097815 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Dec 13 02:01:08.097835 kernel: Zone ranges:
Dec 13 02:01:08.097852 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:01:08.097869 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 02:01:08.097886 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 02:01:08.097903 kernel: Movable zone start for each node
Dec 13 02:01:08.097920 kernel: Early memory node ranges
Dec 13 02:01:08.097937 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 02:01:08.097954 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 02:01:08.097974 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff]
Dec 13 02:01:08.097991 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff]
Dec 13 02:01:08.098008 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 02:01:08.098023 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 02:01:08.098039 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 02:01:08.098056 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:01:08.098072 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 02:01:08.098089 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 02:01:08.098105 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 13 02:01:08.098126 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 02:01:08.098142 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 02:01:08.098159 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:01:08.098175 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:01:08.098192 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:01:08.098216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:01:08.098233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:01:08.098249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:01:08.098266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:01:08.098286 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:01:08.098303 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:01:08.098320 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 02:01:08.098336 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:01:08.098353 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:01:08.098370 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:01:08.098387 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:01:08.098404 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:01:08.098420 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:01:08.098439 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:01:08.098456 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:01:08.098472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Dec 13 02:01:08.098489 kernel: Policy zone: Normal
Dec 13 02:01:08.098507 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:01:08.098524 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:01:08.098541 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 02:01:08.098557 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 02:01:08.098574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:01:08.098611 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 344876K reserved, 0K cma-reserved)
Dec 13 02:01:08.098628 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:01:08.098645 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:01:08.098662 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:01:08.098678 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:01:08.098695 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:01:08.098713 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:01:08.098730 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:01:08.098751 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:01:08.098780 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:01:08.098798 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:01:08.098819 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:01:08.098836 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:01:08.098854 kernel: Console: colour dummy device 80x25
Dec 13 02:01:08.098870 kernel: printk: console [ttyS0] enabled
Dec 13 02:01:08.098887 kernel: ACPI: Core revision 20210730
Dec 13 02:01:08.098904 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:01:08.098922 kernel: x2apic enabled
Dec 13 02:01:08.098944 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:01:08.098961 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 02:01:08.098979 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 02:01:08.098998 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 02:01:08.099016 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 02:01:08.099034 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 02:01:08.099052 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:01:08.099073 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 02:01:08.099102 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 02:01:08.099121 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 02:01:08.099138 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:01:08.099156 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:01:08.099173 kernel: RETBleed: Mitigation: IBRS
Dec 13 02:01:08.099191 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:01:08.099215 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Dec 13 02:01:08.099233 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 02:01:08.099254 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 02:01:08.099272 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:01:08.099289 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:01:08.099307 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:01:08.099325 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:01:08.099342 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:01:08.099360 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 02:01:08.099378 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:01:08.099395 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:01:08.099416 kernel: LSM: Security Framework initializing
Dec 13 02:01:08.099433 kernel: SELinux: Initializing.
Dec 13 02:01:08.099450 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:01:08.099467 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:01:08.099486 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 02:01:08.099504 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 02:01:08.099521 kernel: signal: max sigframe size: 1776
Dec 13 02:01:08.099547 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:01:08.099565 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:01:08.099585 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:01:08.099625 kernel: x86: Booting SMP configuration:
Dec 13 02:01:08.099643 kernel: .... node #0, CPUs: #1
Dec 13 02:01:08.099660 kernel: kvm-clock: cpu 1, msr 1c019b041, secondary cpu clock
Dec 13 02:01:08.099678 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:01:08.099697 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:01:08.099715 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:01:08.099733 kernel: smpboot: Max logical packages: 1
Dec 13 02:01:08.099754 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 02:01:08.099771 kernel: devtmpfs: initialized
Dec 13 02:01:08.099795 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:01:08.099820 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 02:01:08.099838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:01:08.099856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:01:08.099874 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:01:08.099891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:01:08.099909 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:01:08.099929 kernel: audit: type=2000 audit(1734055267.160:1): state=initialized audit_enabled=0 res=1
Dec 13 02:01:08.099947 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:01:08.099964 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:01:08.099981 kernel: cpuidle: using governor menu
Dec 13 02:01:08.099997 kernel: ACPI: bus type PCI registered
Dec 13 02:01:08.100014 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:01:08.100032 kernel: dca service started, version 1.12.1
Dec 13 02:01:08.100050 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:01:08.100068 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:01:08.100090 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:01:08.100108 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:01:08.100126 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:01:08.100143 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:01:08.100161 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:01:08.100179 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:01:08.100197 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:01:08.100221 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:01:08.100238 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:01:08.100258 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:01:08.100276 kernel: ACPI: Interpreter enabled
Dec 13 02:01:08.100294 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 02:01:08.100312 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:01:08.100330 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:01:08.100347 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:01:08.100365 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:01:08.100651 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:01:08.100838 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:01:08.100862 kernel: PCI host bridge to bus 0000:00
Dec 13 02:01:08.101023 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:01:08.101174 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:01:08.101347 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:01:08.101495 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 02:01:08.101895 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:01:08.102385 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:01:08.102813 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 02:01:08.102996 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 02:01:08.103170 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:01:08.103358 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 02:01:08.103526 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 02:01:08.110727 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 02:01:08.110948 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 02:01:08.111129 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 02:01:08.111291 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 02:01:08.111460 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 02:01:08.111662 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 02:01:08.111826 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 02:01:08.111856 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:01:08.111873 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:01:08.111891 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:01:08.111908 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:01:08.111925 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:01:08.111943 kernel: iommu: Default domain type: Translated
Dec 13 02:01:08.111960 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:01:08.111977 kernel: vgaarb: loaded
Dec 13 02:01:08.111994 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:01:08.112015 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 02:01:08.112032 kernel: PTP clock support registered
Dec 13 02:01:08.112050 kernel: Registered efivars operations
Dec 13 02:01:08.112066 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:01:08.112083 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:01:08.112100 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 02:01:08.112118 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 02:01:08.112134 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff]
Dec 13 02:01:08.112151 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 02:01:08.112171 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 02:01:08.112188 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:01:08.112204 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:01:08.112222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:01:08.112239 kernel: pnp: PnP ACPI init
Dec 13 02:01:08.112256 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 02:01:08.112273 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:01:08.112289 kernel: NET: Registered PF_INET protocol family
Dec 13 02:01:08.112306 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:01:08.112327 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 02:01:08.112344 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:01:08.112362 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:01:08.112379 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 02:01:08.112396 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 02:01:08.112413 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:01:08.112431 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 02:01:08.112448 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:01:08.112468 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:01:08.112648 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:01:08.112805 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:01:08.112947 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:01:08.113090 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 02:01:08.113261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:01:08.113286 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:01:08.113312 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 02:01:08.113332 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 02:01:08.113353 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:01:08.113372 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 02:01:08.113392 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:01:08.113410 kernel: Initialise system trusted keyrings
Dec 13 02:01:08.113429 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 02:01:08.113448 kernel: Key type asymmetric registered
Dec 13 02:01:08.113466 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:01:08.113488 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:01:08.113505 kernel: io scheduler mq-deadline registered
Dec 13 02:01:08.113523 kernel: io scheduler kyber registered
Dec 13 02:01:08.113550 kernel: io scheduler bfq registered
Dec 13 02:01:08.113568 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:01:08.125887 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 02:01:08.126274 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 02:01:08.126305 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 02:01:08.126789 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 02:01:08.126827 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 02:01:08.127107 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 02:01:08.127132 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:01:08.127151 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:01:08.127169 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 02:01:08.127187 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 02:01:08.127205 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 02:01:08.127382 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 02:01:08.127414 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:01:08.127432 kernel: i8042: Warning: Keylock active
Dec 13 02:01:08.127451 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:01:08.127468 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:01:08.133847 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:01:08.134434 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:01:08.134891 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:01:07 UTC (1734055267)
Dec 13 02:01:08.135179 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:01:08.135212 kernel: intel_pstate: CPU model not supported
Dec 13 02:01:08.135231 kernel: pstore: Registered efi as persistent store backend
Dec 13 02:01:08.135250 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:01:08.135267 kernel: Segment Routing with IPv6
Dec 13 02:01:08.135286 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:01:08.135303 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:01:08.135321 kernel: Key type dns_resolver registered
Dec 13 02:01:08.135339 kernel: IPI shorthand broadcast: enabled
Dec 13 02:01:08.135357 kernel: sched_clock: Marking stable (742618652, 144226697)->(928960614, -42115265)
Dec 13 02:01:08.135378 kernel: registered taskstats version 1
Dec 13 02:01:08.135396 kernel: Loading compiled-in X.509 certificates
Dec 13 02:01:08.135413 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:01:08.135432 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:01:08.135449 kernel: Key type .fscrypt registered
Dec 13 02:01:08.135467 kernel: Key type fscrypt-provisioning registered
Dec 13 02:01:08.135484 kernel: pstore: Using crash dump compression: deflate
Dec 13 02:01:08.135502 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:01:08.135520 kernel: ima: No architecture policies found
Dec 13 02:01:08.135549 kernel: clk: Disabling unused clocks
Dec 13 02:01:08.135566 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:01:08.135584 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:01:08.135644 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:01:08.135662 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:01:08.135679 kernel: Run /init as init process
Dec 13 02:01:08.135697 kernel: with arguments:
Dec 13 02:01:08.135714 kernel: /init
Dec 13 02:01:08.135731 kernel: with environment:
Dec 13 02:01:08.135752 kernel: HOME=/
Dec 13 02:01:08.135770 kernel: TERM=linux
Dec 13 02:01:08.135787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:01:08.135809 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:01:08.135831 systemd[1]: Detected virtualization kvm.
Dec 13 02:01:08.135851 systemd[1]: Detected architecture x86-64.
Dec 13 02:01:08.135869 systemd[1]: Running in initrd.
Dec 13 02:01:08.135890 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:01:08.135908 systemd[1]: Hostname set to .
Dec 13 02:01:08.135928 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:01:08.135946 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:01:08.135964 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:01:08.135983 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:01:08.136001 systemd[1]: Reached target paths.target.
Dec 13 02:01:08.136019 systemd[1]: Reached target slices.target.
Dec 13 02:01:08.136041 systemd[1]: Reached target swap.target.
Dec 13 02:01:08.136058 systemd[1]: Reached target timers.target.
Dec 13 02:01:08.136078 systemd[1]: Listening on iscsid.socket.
Dec 13 02:01:08.136097 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:01:08.136115 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:01:08.136133 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:01:08.136152 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:01:08.136170 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:01:08.136192 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:01:08.136210 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:01:08.136248 systemd[1]: Reached target sockets.target.
Dec 13 02:01:08.136270 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:01:08.136288 systemd[1]: Finished network-cleanup.service.
Dec 13 02:01:08.136307 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:01:08.136326 systemd[1]: Starting systemd-journald.service...
Dec 13 02:01:08.136349 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:01:08.136368 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:01:08.136387 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:01:08.136407 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:01:08.136426 kernel: audit: type=1130 audit(1734055268.092:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.136443 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:01:08.136461 kernel: audit: type=1130 audit(1734055268.102:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.136479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:01:08.136501 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:01:08.136519 kernel: audit: type=1130 audit(1734055268.126:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.136545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:01:08.136570 systemd-journald[190]: Journal started
Dec 13 02:01:08.136682 systemd-journald[190]: Runtime Journal (/run/log/journal/b4ff8d8716853d802997cd8f2bd8f09b) is 8.0M, max 148.8M, 140.8M free.
Dec 13 02:01:08.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.121668 systemd-modules-load[191]: Inserted module 'overlay'
Dec 13 02:01:08.159866 kernel: audit: type=1130 audit(1734055268.135:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.159906 systemd[1]: Started systemd-journald.service.
Dec 13 02:01:08.159934 kernel: audit: type=1130 audit(1734055268.151:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.160962 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:01:08.182190 systemd-resolved[192]: Positive Trust Anchors:
Dec 13 02:01:08.182643 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:01:08.182807 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:01:08.189913 systemd-resolved[192]: Defaulting to hostname 'linux'.
Dec 13 02:01:08.191683 systemd[1]: Started systemd-resolved.service.
Dec 13 02:01:08.191933 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:01:08.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.198150 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:01:08.207728 kernel: audit: type=1130 audit(1734055268.190:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.207774 kernel: audit: type=1130 audit(1734055268.200:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.203169 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:01:08.211721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:01:08.220435 systemd-modules-load[191]: Inserted module 'br_netfilter'
Dec 13 02:01:08.223717 kernel: Bridge firewalling registered
Dec 13 02:01:08.223753 dracut-cmdline[206]: dracut-dracut-053
Dec 13 02:01:08.233737 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:01:08.255279 kernel: SCSI subsystem initialized
Dec 13 02:01:08.273231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:01:08.273308 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:01:08.274860 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:01:08.279721 systemd-modules-load[191]: Inserted module 'dm_multipath'
Dec 13 02:01:08.281121 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:01:08.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.289938 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:01:08.297725 kernel: audit: type=1130 audit(1734055268.287:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.306765 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:01:08.317748 kernel: audit: type=1130 audit(1734055268.309:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.327632 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:01:08.347633 kernel: iscsi: registered transport (tcp)
Dec 13 02:01:08.374848 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:01:08.374934 kernel: QLogic iSCSI HBA Driver
Dec 13 02:01:08.420320 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:01:08.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.426477 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:01:08.484670 kernel: raid6: avx2x4 gen() 18385 MB/s
Dec 13 02:01:08.502641 kernel: raid6: avx2x4 xor() 8201 MB/s
Dec 13 02:01:08.519664 kernel: raid6: avx2x2 gen() 18424 MB/s
Dec 13 02:01:08.536650 kernel: raid6: avx2x2 xor() 18353 MB/s
Dec 13 02:01:08.554649 kernel: raid6: avx2x1 gen() 13971 MB/s
Dec 13 02:01:08.571645 kernel: raid6: avx2x1 xor() 16135 MB/s
Dec 13 02:01:08.589637 kernel: raid6: sse2x4 gen() 11002 MB/s
Dec 13 02:01:08.607636 kernel: raid6: sse2x4 xor() 6629 MB/s
Dec 13 02:01:08.624664 kernel: raid6: sse2x2 gen() 12026 MB/s
Dec 13 02:01:08.641706 kernel: raid6: sse2x2 xor() 7317 MB/s
Dec 13 02:01:08.658641 kernel: raid6: sse2x1 gen() 10445 MB/s
Dec 13 02:01:08.676216 kernel: raid6: sse2x1 xor() 5170 MB/s
Dec 13 02:01:08.676253 kernel: raid6: using algorithm avx2x2 gen() 18424 MB/s
Dec 13 02:01:08.676276 kernel: raid6: .... xor() 18353 MB/s, rmw enabled
Dec 13 02:01:08.676907 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:01:08.692635 kernel: xor: automatically using best checksumming function avx
Dec 13 02:01:08.800649 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:01:08.812430 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:01:08.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.811000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:01:08.811000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:01:08.814165 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:01:08.832468 systemd-udevd[389]: Using default interface naming scheme 'v252'.
Dec 13 02:01:08.840047 systemd[1]: Started systemd-udevd.service.
Dec 13 02:01:08.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.843229 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:01:08.863425 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Dec 13 02:01:08.901076 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:01:08.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:08.903361 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:01:08.968996 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:01:08.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:09.050621 kernel: scsi host0: Virtio SCSI HBA
Dec 13 02:01:09.057642 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 02:01:09.064655 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:01:09.118103 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:01:09.118183 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:01:09.179993 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 02:01:09.202610 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 02:01:09.202860 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 02:01:09.203071 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 02:01:09.203284 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:01:09.203508 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:01:09.203541 kernel: GPT:17805311 != 25165823
Dec 13 02:01:09.203563 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:01:09.203586 kernel: GPT:17805311 != 25165823
Dec 13 02:01:09.203623 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:01:09.203644 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:01:09.203667 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 02:01:09.262050 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:01:09.285773 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (439)
Dec 13 02:01:09.299756 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:01:09.300003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:01:09.330528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:01:09.347338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:01:09.361832 systemd[1]: Starting disk-uuid.service...
Dec 13 02:01:09.382896 disk-uuid[512]: Primary Header is updated.
Dec 13 02:01:09.382896 disk-uuid[512]: Secondary Entries is updated.
Dec 13 02:01:09.382896 disk-uuid[512]: Secondary Header is updated.
Dec 13 02:01:09.422741 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:01:09.422779 kernel: GPT:disk_guids don't match.
Dec 13 02:01:09.422803 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:01:09.422823 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:01:09.445634 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:01:10.432652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:01:10.432878 disk-uuid[513]: The operation has completed successfully.
Dec 13 02:01:10.498893 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:01:10.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:10.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:10.499028 systemd[1]: Finished disk-uuid.service.
Dec 13 02:01:10.520803 systemd[1]: Starting verity-setup.service...
Dec 13 02:01:10.547631 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:01:10.629573 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:01:10.632225 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:01:10.655330 systemd[1]: Finished verity-setup.service.
Dec 13 02:01:10.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:10.735663 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:01:10.736292 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:01:10.744039 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:01:10.790773 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:01:10.790815 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:01:10.790837 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:01:10.745080 systemd[1]: Starting ignition-setup.service...
Dec 13 02:01:10.804762 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:01:10.759944 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:01:10.835434 systemd[1]: Finished ignition-setup.service.
Dec 13 02:01:10.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:10.836818 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:01:10.882845 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:01:10.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:10.882000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:01:10.885038 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:01:10.920584 systemd-networkd[687]: lo: Link UP
Dec 13 02:01:10.920621 systemd-networkd[687]: lo: Gained carrier
Dec 13 02:01:10.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:10.921658 systemd-networkd[687]: Enumeration completed
Dec 13 02:01:10.922158 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:01:10.923975 systemd[1]: Started systemd-networkd.service.
Dec 13 02:01:10.924393 systemd-networkd[687]: eth0: Link UP
Dec 13 02:01:10.924400 systemd-networkd[687]: eth0: Gained carrier
Dec 13 02:01:10.928044 systemd[1]: Reached target network.target.
Dec 13 02:01:10.934856 systemd-networkd[687]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 02:01:10.950989 systemd[1]: Starting iscsiuio.service...
Dec 13 02:01:11.022921 systemd[1]: Started iscsiuio.service.
Dec 13 02:01:11.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.031159 systemd[1]: Starting iscsid.service...
Dec 13 02:01:11.043767 iscsid[696]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:01:11.043767 iscsid[696]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 02:01:11.043767 iscsid[696]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 02:01:11.043767 iscsid[696]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:01:11.043767 iscsid[696]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:01:11.043767 iscsid[696]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:01:11.043767 iscsid[696]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:01:11.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.056167 systemd[1]: Started iscsid.service.
Dec 13 02:01:11.136511 ignition[647]: Ignition 2.14.0
Dec 13 02:01:11.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.094969 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:01:11.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.136524 ignition[647]: Stage: fetch-offline
Dec 13 02:01:11.118901 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:01:11.136611 ignition[647]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:11.123069 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:01:11.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.136673 ignition[647]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:11.137936 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:01:11.159057 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:11.154928 systemd[1]: Reached target remote-fs.target.
Dec 13 02:01:11.159293 ignition[647]: parsed url from cmdline: ""
Dec 13 02:01:11.174050 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:01:11.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.159300 ignition[647]: no config URL provided
Dec 13 02:01:11.197131 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:01:11.159309 ignition[647]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:01:11.212072 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:01:11.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.159324 ignition[647]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:01:11.226933 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:01:11.159334 ignition[647]: failed to fetch config: resource requires networking
Dec 13 02:01:11.260076 unknown[711]: fetched base config from "system"
Dec 13 02:01:11.159480 ignition[647]: Ignition finished successfully
Dec 13 02:01:11.260090 unknown[711]: fetched base config from "system"
Dec 13 02:01:11.238460 ignition[711]: Ignition 2.14.0
Dec 13 02:01:11.260101 unknown[711]: fetched user config from "gcp"
Dec 13 02:01:11.238469 ignition[711]: Stage: fetch
Dec 13 02:01:11.262838 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:01:11.238644 ignition[711]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:11.273935 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:01:11.238682 ignition[711]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:11.308140 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:01:11.246726 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:11.324203 systemd[1]: Starting ignition-disks.service...
Dec 13 02:01:11.246913 ignition[711]: parsed url from cmdline: ""
Dec 13 02:01:11.347330 systemd[1]: Finished ignition-disks.service.
Dec 13 02:01:11.246918 ignition[711]: no config URL provided
Dec 13 02:01:11.362141 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:01:11.246925 ignition[711]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:01:11.383825 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:01:11.246935 ignition[711]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:01:11.397802 systemd[1]: Reached target local-fs.target.
Dec 13 02:01:11.246970 ignition[711]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 02:01:11.411803 systemd[1]: Reached target sysinit.target.
Dec 13 02:01:11.252732 ignition[711]: GET result: OK
Dec 13 02:01:11.426817 systemd[1]: Reached target basic.target.
Dec 13 02:01:11.252833 ignition[711]: parsing config with SHA512: dc47f306d79dcde1760564152908e375c060d2a7fe5e9d17d9191e413bb1c52a48861bbfb4c560a2b0d637b6df3b9185c95b78462fdec49da8e6b5432dccab9a
Dec 13 02:01:11.441038 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:01:11.261145 ignition[711]: fetch: fetch complete
Dec 13 02:01:11.261153 ignition[711]: fetch: fetch passed
Dec 13 02:01:11.261207 ignition[711]: Ignition finished successfully
Dec 13 02:01:11.287320 ignition[717]: Ignition 2.14.0
Dec 13 02:01:11.287328 ignition[717]: Stage: kargs
Dec 13 02:01:11.287457 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:11.287487 ignition[717]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:11.295931 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:11.297321 ignition[717]: kargs: kargs passed
Dec 13 02:01:11.297371 ignition[717]: Ignition finished successfully
Dec 13 02:01:11.336795 ignition[723]: Ignition 2.14.0
Dec 13 02:01:11.336805 ignition[723]: Stage: disks
Dec 13 02:01:11.336939 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:11.336971 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:11.344781 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:11.346079 ignition[723]: disks: disks passed
Dec 13 02:01:11.346130 ignition[723]: Ignition finished successfully
Dec 13 02:01:11.484435 systemd-fsck[731]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 02:01:11.674517 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:01:11.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.683904 systemd[1]: Mounting sysroot.mount...
Dec 13 02:01:11.713777 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:01:11.711409 systemd[1]: Mounted sysroot.mount.
Dec 13 02:01:11.721099 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:01:11.741056 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:01:11.758256 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 02:01:11.758342 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:01:11.758385 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:01:11.779182 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:01:11.807349 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:01:11.849969 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (737)
Dec 13 02:01:11.850011 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:01:11.850036 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:01:11.850058 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:01:11.823004 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:01:11.855769 initrd-setup-root[742]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:01:11.883758 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:01:11.883225 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:01:11.891898 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:01:11.909726 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:01:11.919848 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:01:11.958216 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:01:11.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:11.959530 systemd[1]: Starting ignition-mount.service...
Dec 13 02:01:11.981852 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:01:11.997164 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:01:11.997458 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:01:12.025138 ignition[803]: INFO : Ignition 2.14.0
Dec 13 02:01:12.025138 ignition[803]: INFO : Stage: mount
Dec 13 02:01:12.025138 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:12.025138 ignition[803]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:12.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:12.027378 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:01:12.113914 kernel: kauditd_printk_skb: 25 callbacks suppressed
Dec 13 02:01:12.113947 kernel: audit: type=1130 audit(1734055272.080:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:12.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:12.114014 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:12.114014 ignition[803]: INFO : mount: mount passed
Dec 13 02:01:12.114014 ignition[803]: INFO : Ignition finished successfully
Dec 13 02:01:12.186752 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (812)
Dec 13 02:01:12.186793 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:01:12.186816 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:01:12.186838 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:01:12.186860 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:01:12.046232 systemd[1]: Finished ignition-mount.service.
Dec 13 02:01:12.083419 systemd[1]: Starting ignition-files.service...
Dec 13 02:01:12.124695 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:01:12.185446 systemd[1]: Mounted sysroot-usr-share-oem.mount.
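[Editor's note] The audit[1]: SERVICE_START records and their kernel: audit: type=1130/1131 echoes above all share one key=value payload, with the interesting fields nested inside msg='...'. A small parsing sketch, with the record format taken from the log itself; this is not a full audit(8) parser:

# Sketch: pull key=value fields out of an audit SERVICE_START/STOP record.
# The msg='...' value nests further key=value pairs, so we recurse into it.
import re

PAIR = re.compile(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)")

def parse_audit(text: str) -> dict:
    out = {}
    for key, val in PAIR.findall(text):
        val = val.strip("'\"")
        if key == "msg":                 # msg='unit=... res=success'
            out.update(parse_audit(val))
        else:
            out[key] = val
    return out

rec = parse_audit("SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
                  "subj=kernel msg='unit=ignition-mount comm=\"systemd\" "
                  "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? "
                  "terminal=? res=success'")
print(rec["unit"], rec["res"])           # -> ignition-mount success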
Dec 13 02:01:12.227826 ignition[831]: INFO : Ignition 2.14.0
Dec 13 02:01:12.227826 ignition[831]: INFO : Stage: files
Dec 13 02:01:12.227826 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:12.227826 ignition[831]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:12.227826 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:12.227826 ignition[831]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:01:12.227826 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:01:12.227826 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:01:12.330772 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (831)
Dec 13 02:01:12.235272 unknown[831]: wrote ssh authorized keys file for user: core
Dec 13 02:01:12.339770 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787931728"
Dec 13 02:01:12.339770 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787931728": device or resource busy
Dec 13 02:01:12.339770 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3787931728", trying btrfs: device or resource busy
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787931728"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787931728"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem3787931728"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem3787931728"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:01:12.339770 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:01:12.571763 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
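[Editor's note] The op(4)/op(5) pair above shows the files stage first trying to mount the OEM partition as ext4, logging the failure, then succeeding with btrfs; the same pattern repeats for every later OEM mount. A shell-out sketch of that fallback, assuming root privileges and the mount(8) CLI rather than the mount(2) syscall:

# Sketch: the ext4-then-btrfs mount fallback seen in op(4)/op(5) above.
import subprocess
import tempfile

def mount_oem(device: str = "/dev/disk/by-label/OEM") -> str:
    target = tempfile.mkdtemp(prefix="oem")   # e.g. /tmp/oem3787931728
    for fstype in ("ext4", "btrfs"):
        res = subprocess.run(["mount", "-t", fstype, device, target],
                             capture_output=True, text=True)
        if res.returncode == 0:
            return target                     # [finished] mounting
        print(f"failed to mount {fstype} device {device}: "
              f"{res.stderr.strip()}, trying next type")
    raise RuntimeError(f"could not mount {device}")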
Dec 13 02:01:12.481751 systemd-networkd[687]: eth0: Gained IPv6LL
Dec 13 02:01:12.589724 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:01:12.589724 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:01:12.589724 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 02:01:12.926661 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Dec 13 02:01:13.162164 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2491939649"
Dec 13 02:01:13.177775 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2491939649": device or resource busy
Dec 13 02:01:13.177775 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2491939649", trying btrfs: device or resource busy
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2491939649"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2491939649"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem2491939649"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem2491939649"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:01:13.177775 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:01:13.177544 systemd[1]: mnt-oem2491939649.mount: Deactivated successfully.
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486122341"
Dec 13 02:01:13.433761 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486122341": device or resource busy
Dec 13 02:01:13.433761 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1486122341", trying btrfs: device or resource busy
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486122341"
Dec 13 02:01:13.433761 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486122341"
Dec 13 02:01:13.205566 systemd[1]: mnt-oem1486122341.mount: Deactivated successfully.
Dec 13 02:01:13.689776 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem1486122341"
Dec 13 02:01:13.689776 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem1486122341"
Dec 13 02:01:13.689776 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 02:01:13.689776 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:01:13.689776 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 02:01:13.689776 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Dec 13 02:01:14.186945 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:01:14.186945 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2100996676"
Dec 13 02:01:14.222880 ignition[831]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2100996676": device or resource busy
Dec 13 02:01:14.222880 ignition[831]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2100996676", trying btrfs: device or resource busy
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2100996676"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2100996676"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem2100996676"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem2100996676"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1c): [started] processing unit "oem-gce-enable-oslogin.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1c): [finished] processing unit "oem-gce-enable-oslogin.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1d): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1e): [started] processing unit "oem-gce.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1e): [finished] processing unit "oem-gce.service"
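[Editor's note] The op(12) records earlier in this stage write the sysext activation link /etc/extensions/kubernetes.raw pointing at the image op(17) just downloaded, all under the /sysroot prefix. A minimal Python equivalent, with the paths copied from the log; note the target stays absolute relative to the future root, so it only resolves after switch-root:

# Sketch: recreate the op(12) symlink under the /sysroot prefix.
import os

SYSROOT = "/sysroot"
LINK = "etc/extensions/kubernetes.raw"
TARGET = "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"

link_path = os.path.join(SYSROOT, LINK)
os.makedirs(os.path.dirname(link_path), exist_ok=True)
# Absolute target, interpreted inside the new root after switch-root.
os.symlink(TARGET, link_path)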
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
Dec 13 02:01:14.222880 ignition[831]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:01:14.703785 kernel: audit: type=1130 audit(1734055274.229:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.703839 kernel: audit: type=1130 audit(1734055274.346:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.703865 kernel: audit: type=1130 audit(1734055274.384:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.703890 kernel: audit: type=1131 audit(1734055274.384:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.703911 kernel: audit: type=1130 audit(1734055274.537:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.703946 kernel: audit: type=1131 audit(1734055274.537:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.703962 kernel: audit: type=1130 audit(1734055274.650:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.210915 systemd[1]: Finished ignition-files.service.
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(24): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:01:14.718781 ignition[831]: INFO : files: op(24): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:01:14.718781 ignition[831]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:01:14.718781 ignition[831]: INFO : files: files passed
Dec 13 02:01:14.718781 ignition[831]: INFO : Ignition finished successfully
Dec 13 02:01:14.970974 kernel: audit: type=1131 audit(1734055274.776:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.241361 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:01:14.268976 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:01:15.019912 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:01:14.270125 systemd[1]: Starting ignition-quench.service...
Dec 13 02:01:14.317270 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:01:15.092785 kernel: audit: type=1131 audit(1734055275.062:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.348443 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:01:14.348586 systemd[1]: Finished ignition-quench.service.
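[Editor's note] The op(21) through op(24) records above enable units through systemd's preset mechanism rather than by creating symlinks directly. A sketch that writes an equivalent preset file; the preset directory and "enable UNIT" line format are standard systemd, while the 20-ignition.preset filename is an assumption for illustration:

# Sketch: a preset file equivalent to the "setting preset to enabled"
# records above. Filename is hypothetical; unit names come from the log.
from pathlib import Path

UNITS = [
    "oem-gce.service",
    "prepare-helm.service",
    "oem-gce-enable-oslogin.service",
    "coreos-metadata-sshkeys@.service",
]

preset = Path("/sysroot/etc/systemd/system-preset/20-ignition.preset")
preset.parent.mkdir(parents=True, exist_ok=True)
preset.write_text("".join(f"enable {u}\n" for u in UNITS))
# `systemctl preset-all` in the new root would then apply these presets.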
Dec 13 02:01:15.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.386183 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:01:15.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.469952 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:01:15.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.518404 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:01:14.518518 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:01:15.174772 ignition[869]: INFO : Ignition 2.14.0
Dec 13 02:01:15.174772 ignition[869]: INFO : Stage: umount
Dec 13 02:01:15.174772 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:01:15.174772 ignition[869]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:01:15.174772 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:01:15.174772 ignition[869]: INFO : umount: umount passed
Dec 13 02:01:15.174772 ignition[869]: INFO : Ignition finished successfully
Dec 13 02:01:15.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.539050 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:01:15.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.599811 systemd[1]: Reached target initrd.target.
Dec 13 02:01:15.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.599953 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:01:15.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.601192 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:01:15.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.618192 systemd[1]: Finished dracut-pre-pivot.service.
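[Editor's note] The flood of SERVICE_STOP records here can be recovered after the fact from the journal rather than the console. A sketch using journalctl's JSON output; -b, -o json, and the _TRANSPORT=audit field match are standard journalctl/journald, while the MESSAGE filtering below is best-effort illustration:

# Sketch: list audit SERVICE_START/STOP events for the current boot.
import json
import subprocess

out = subprocess.run(
    ["journalctl", "-b", "-o", "json", "_TRANSPORT=audit"],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    msg = json.loads(line).get("MESSAGE", "")
    if "SERVICE_START" in msg or "SERVICE_STOP" in msg:
        print(msg)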
Dec 13 02:01:15.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.653395 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:01:15.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.691049 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:01:14.712081 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:01:15.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.719124 systemd[1]: Stopped target timers.target.
Dec 13 02:01:14.743136 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:01:14.743320 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:01:14.778334 systemd[1]: Stopped target initrd.target.
Dec 13 02:01:14.829307 systemd[1]: Stopped target basic.target.
Dec 13 02:01:14.842149 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:01:14.861127 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:01:15.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.881163 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:01:15.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:14.901135 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:01:14.922121 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:01:14.964102 systemd[1]: Stopped target sysinit.target.
Dec 13 02:01:14.985087 systemd[1]: Stopped target local-fs.target.
Dec 13 02:01:15.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.002944 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:01:15.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.027950 systemd[1]: Stopped target swap.target.
Dec 13 02:01:15.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.592000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:01:15.048990 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:01:15.049194 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:01:15.064212 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:01:15.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.101017 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:01:15.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.101219 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:01:15.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.117127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:01:15.117308 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:01:15.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.134113 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:01:15.134289 systemd[1]: Stopped ignition-files.service.
Dec 13 02:01:15.151590 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:01:15.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.182073 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:01:15.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.189911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:01:15.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.190119 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:01:15.202733 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:01:15.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.217898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:01:15.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.218110 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:01:15.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:15.259267 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:01:15.259456 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:01:15.270165 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:01:15.271251 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:01:15.271468 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:01:15.916780 systemd-journald[190]: Received SIGTERM from PID 1 (n/a).
Dec 13 02:01:15.916851 iscsid[696]: iscsid shutting down.
Dec 13 02:01:15.287566 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:01:15.287717 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:01:15.302384 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:01:15.302490 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:01:15.317497 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:01:15.317666 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:01:15.335839 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:01:15.335927 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:01:15.351850 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:01:15.351934 systemd[1]: Stopped ignition-fetch.service.
Dec 13 02:01:15.366810 systemd[1]: Stopped target network.target.
Dec 13 02:01:15.379746 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:01:15.379868 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:01:15.394856 systemd[1]: Stopped target paths.target.
Dec 13 02:01:15.409740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:01:15.413707 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:01:15.424733 systemd[1]: Stopped target slices.target.
Dec 13 02:01:15.438732 systemd[1]: Stopped target sockets.target.
Dec 13 02:01:15.451808 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:01:15.451873 systemd[1]: Closed iscsid.socket.
Dec 13 02:01:15.465827 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:01:15.465911 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:01:15.479807 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:01:15.479904 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:01:15.495853 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:01:15.495930 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:01:15.512003 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:01:15.515651 systemd-networkd[687]: eth0: DHCPv6 lease lost
Dec 13 02:01:15.528047 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:01:15.543353 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:01:15.543477 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:01:15.556620 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:01:15.556761 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:01:15.578712 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:01:15.578829 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:01:15.594859 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:01:15.594904 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:01:15.609830 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:01:15.616887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:01:15.616965 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:01:15.637907 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:01:15.637982 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:01:15.653030 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:01:15.653096 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:01:15.669004 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:01:15.684277 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:01:15.685004 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:01:15.685153 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:01:15.702325 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:01:15.702427 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:01:15.716936 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:01:15.716991 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:01:15.732918 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:01:15.732996 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:01:15.748002 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:01:15.748079 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:01:15.762949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:01:15.763018 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:01:15.779038 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:01:15.795726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:01:15.795844 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:01:15.811440 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:01:15.811627 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:01:15.826197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:01:15.826316 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:01:15.844102 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:01:15.860872 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:01:15.878516 systemd[1]: Switching root.
Dec 13 02:01:15.926691 systemd-journald[190]: Journal stopped
Dec 13 02:01:20.581138 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:01:20.581261 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:01:20.581288 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:01:20.581316 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:01:20.581345 kernel: SELinux: policy capability open_perms=1
Dec 13 02:01:20.581367 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:01:20.581391 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:01:20.581414 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:01:20.581437 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:01:20.581466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:01:20.581488 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:01:20.581512 systemd[1]: Successfully loaded SELinux policy in 111.527ms.
Dec 13 02:01:20.581555 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.157ms.
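[Editor's note] The "policy capability X=1" lines above are printed by the kernel as the policy loads; on a running SELinux system the same flags can be read back from selinuxfs. A sketch, assuming selinuxfs is mounted at the standard /sys/fs/selinux location with one file per capability:

# Sketch: read back the policy capabilities the kernel logged above.
from pathlib import Path

caps_dir = Path("/sys/fs/selinux/policy_capabilities")
for cap in sorted(caps_dir.iterdir()):
    print(f"policy capability {cap.name}={cap.read_text().strip()}")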
Dec 13 02:01:20.581580 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:01:20.581620 systemd[1]: Detected virtualization kvm.
Dec 13 02:01:20.581645 systemd[1]: Detected architecture x86-64.
Dec 13 02:01:20.581671 systemd[1]: Detected first boot.
Dec 13 02:01:20.581716 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:01:20.581741 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:01:20.581768 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:01:20.581793 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:01:20.581818 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:01:20.581844 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:01:20.581870 kernel: kauditd_printk_skb: 48 callbacks suppressed
Dec 13 02:01:20.581901 kernel: audit: type=1334 audit(1734055279.687:87): prog-id=12 op=LOAD
Dec 13 02:01:20.581928 kernel: audit: type=1334 audit(1734055279.687:88): prog-id=3 op=UNLOAD
Dec 13 02:01:20.581952 kernel: audit: type=1334 audit(1734055279.692:89): prog-id=13 op=LOAD
Dec 13 02:01:20.581975 kernel: audit: type=1334 audit(1734055279.699:90): prog-id=14 op=LOAD
Dec 13 02:01:20.581997 kernel: audit: type=1334 audit(1734055279.699:91): prog-id=4 op=UNLOAD
Dec 13 02:01:20.582036 kernel: audit: type=1334 audit(1734055279.699:92): prog-id=5 op=UNLOAD
Dec 13 02:01:20.582061 kernel: audit: type=1334 audit(1734055279.706:93): prog-id=15 op=LOAD
Dec 13 02:01:20.582085 kernel: audit: type=1334 audit(1734055279.706:94): prog-id=12 op=UNLOAD
Dec 13 02:01:20.582116 kernel: audit: type=1334 audit(1734055279.713:95): prog-id=16 op=LOAD
Dec 13 02:01:20.582139 kernel: audit: type=1334 audit(1734055279.720:96): prog-id=17 op=LOAD
Dec 13 02:01:20.582165 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:01:20.582188 systemd[1]: Stopped iscsid.service.
Dec 13 02:01:20.582253 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:01:20.582278 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:01:20.582303 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:01:20.582327 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:01:20.582354 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:01:20.582379 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:01:20.582409 systemd[1]: Created slice system-getty.slice.
Dec 13 02:01:20.582433 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:01:20.582457 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:01:20.582484 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:01:20.582507 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:01:20.582532 systemd[1]: Created slice user.slice.
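[Editor's note] The locksmithd.service warnings above flag cgroup-v1 directives; systemd's suggested replacements are CPUWeight= and MemoryMax=. A rewrite sketch: the linear shares-to-weight scaling below is an assumption derived from the two defaults (CPUShares=1024 corresponds to CPUWeight=100), so treat the exact mapping as illustrative:

# Sketch: rewrite the deprecated directives flagged for locksmithd.service.
import re

def modernize(unit_text: str) -> str:
    def shares_to_weight(m: re.Match) -> str:
        # Assumed mapping: weight = shares * 100 / 1024, clamped to 1..10000.
        weight = max(1, min(10000, round(int(m.group(1)) * 100 / 1024)))
        return f"CPUWeight={weight}"
    unit_text = re.sub(r"CPUShares=(\d+)", shares_to_weight, unit_text)
    return unit_text.replace("MemoryLimit=", "MemoryMax=")

print(modernize("CPUShares=512\nMemoryLimit=1G\n"))
# -> CPUWeight=50
#    MemoryMax=1G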
Dec 13 02:01:20.582578 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:01:20.582621 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:01:20.582645 systemd[1]: Set up automount boot.automount.
Dec 13 02:01:20.582666 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:01:20.582691 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:01:20.582716 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:01:20.582739 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:01:20.582762 systemd[1]: Reached target integritysetup.target.
Dec 13 02:01:20.582786 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:01:20.582810 systemd[1]: Reached target remote-fs.target.
Dec 13 02:01:20.582837 systemd[1]: Reached target slices.target.
Dec 13 02:01:20.582860 systemd[1]: Reached target swap.target.
Dec 13 02:01:20.582885 systemd[1]: Reached target torcx.target.
Dec 13 02:01:20.582908 systemd[1]: Reached target veritysetup.target.
Dec 13 02:01:20.582931 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:01:20.582955 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:01:20.582983 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:01:20.583007 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:01:20.583029 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:01:20.583051 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:01:20.583077 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:01:20.583110 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:01:20.583133 systemd[1]: Mounting media.mount...
Dec 13 02:01:20.583157 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:01:20.583199 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:01:20.583225 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:01:20.583246 systemd[1]: Mounting tmp.mount...
Dec 13 02:01:20.583261 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:01:20.583277 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:01:20.583306 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:01:20.583331 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:01:20.583356 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:01:20.583380 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:01:20.583406 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:01:20.583429 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:01:20.583454 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:01:20.583478 kernel: fuse: init (API version 7.34)
Dec 13 02:01:20.583495 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:01:20.583514 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:01:20.583530 kernel: loop: module loaded
Dec 13 02:01:20.583544 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:01:20.583561 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:01:20.583576 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:01:20.583591 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:01:20.583651 systemd[1]: Starting systemd-journald.service...
Dec 13 02:01:20.583676 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:01:20.583699 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:01:20.583753 systemd-journald[994]: Journal started
Dec 13 02:01:20.583848 systemd-journald[994]: Runtime Journal (/run/log/journal/b4ff8d8716853d802997cd8f2bd8f09b) is 8.0M, max 148.8M, 140.8M free.
Dec 13 02:01:15.925000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:01:16.237000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:01:16.400000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:01:16.400000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:01:16.400000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:01:16.400000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:01:16.400000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:01:16.400000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:01:16.566000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:01:16.566000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:01:16.566000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:01:16.576000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:01:16.576000 audit[902]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:01:16.576000 audit: CWD cwd="/"
Dec 13 02:01:16.576000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:01:16.576000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:01:16.576000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:01:19.687000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:01:19.687000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:01:19.692000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:01:19.699000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:01:19.699000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:01:19.699000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:01:19.706000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:01:19.706000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:01:19.713000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:01:19.720000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:01:19.720000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:01:19.720000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:01:19.727000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:01:19.727000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:01:19.762000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:01:19.762000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:01:19.762000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:01:19.762000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:01:19.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:19.777000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:01:19.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:19.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:19.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:20.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:20.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:20.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:20.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:01:20.538000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:01:20.538000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:01:20.538000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:01:20.538000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:01:20.538000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:01:20.570000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:01:20.570000 audit[994]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcc7533af0 a2=4000 a3=7ffcc7533b8c items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:01:20.570000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:01:16.560912 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:01:19.686969 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:01:16.562080 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:01:19.765118 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:01:16.562116 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:01:16.562171 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:01:16.562192 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:01:16.562254 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:01:16.562277 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:01:16.562607 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:01:16.562719 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:01:16.562744 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:01:16.565814 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
time="2024-12-13T02:01:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:01:16.565939 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:01:16.565968 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:01:16.566002 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:01:16.566029 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:01:19.071645 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:01:19.071949 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:01:19.072084 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:19Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:01:19.072567 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:01:19.073042 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:01:19.073226 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-12-13T02:01:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:01:20.595632 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:01:20.610639 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:01:20.624618 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:01:20.630633 systemd[1]: Stopped verity-setup.service. 
Dec 13 02:01:20.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.649996 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:20.658651 systemd[1]: Started systemd-journald.service. Dec 13 02:01:20.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.668117 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:01:20.675290 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:01:20.682990 systemd[1]: Mounted media.mount. Dec 13 02:01:20.689969 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:01:20.699964 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:01:20.708965 systemd[1]: Mounted tmp.mount. Dec 13 02:01:20.716108 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:01:20.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.725227 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:01:20.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.734246 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:01:20.734495 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:01:20.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.743247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:01:20.743473 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:01:20.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.752218 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:01:20.752438 systemd[1]: Finished modprobe@drm.service. Dec 13 02:01:20.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:01:20.761215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:01:20.761442 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:01:20.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.770167 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:01:20.770378 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:01:20.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.779142 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:01:20.779346 systemd[1]: Finished modprobe@loop.service. Dec 13 02:01:20.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.788199 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:01:20.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.797232 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:01:20.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.806270 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:01:20.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.815277 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:01:20.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.824697 systemd[1]: Reached target network-pre.target. Dec 13 02:01:20.834921 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:01:20.846133 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:01:20.852726 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 02:01:20.855791 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:01:20.864530 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:01:20.874760 systemd-journald[994]: Time spent on flushing to /var/log/journal/b4ff8d8716853d802997cd8f2bd8f09b is 67.230ms for 1164 entries. Dec 13 02:01:20.874760 systemd-journald[994]: System Journal (/var/log/journal/b4ff8d8716853d802997cd8f2bd8f09b) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:01:20.874776 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:01:20.876509 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:01:20.889802 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:01:20.891555 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:01:20.900631 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:01:20.909443 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:01:20.920197 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:01:20.928894 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:01:20.938323 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:01:20.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.950526 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:01:20.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.959045 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:01:20.970844 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:01:20.973921 systemd-journald[994]: Received client request to flush runtime journal. Dec 13 02:01:20.975789 udevadm[1008]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:01:20.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:20.980438 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:01:20.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:21.578557 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:01:21.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:21.586000 audit: BPF prog-id=24 op=LOAD Dec 13 02:01:21.586000 audit: BPF prog-id=25 op=LOAD Dec 13 02:01:21.586000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:01:21.586000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:01:21.588675 systemd[1]: Starting systemd-udevd.service... Dec 13 02:01:21.612359 systemd-udevd[1011]: Using default interface naming scheme 'v252'. Dec 13 02:01:21.659252 systemd[1]: Started systemd-udevd.service. 
Dec 13 02:01:21.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:21.669000 audit: BPF prog-id=26 op=LOAD Dec 13 02:01:21.671795 systemd[1]: Starting systemd-networkd.service... Dec 13 02:01:21.685000 audit: BPF prog-id=27 op=LOAD Dec 13 02:01:21.685000 audit: BPF prog-id=28 op=LOAD Dec 13 02:01:21.685000 audit: BPF prog-id=29 op=LOAD Dec 13 02:01:21.688093 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:01:21.740779 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:01:21.750145 systemd[1]: Started systemd-userdbd.service. Dec 13 02:01:21.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:21.876664 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1018) Dec 13 02:01:21.897621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:01:21.906438 systemd-networkd[1025]: lo: Link UP Dec 13 02:01:21.906452 systemd-networkd[1025]: lo: Gained carrier Dec 13 02:01:21.907219 systemd-networkd[1025]: Enumeration completed Dec 13 02:01:21.907357 systemd[1]: Started systemd-networkd.service. Dec 13 02:01:21.907728 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:01:21.910037 systemd-networkd[1025]: eth0: Link UP Dec 13 02:01:21.910055 systemd-networkd[1025]: eth0: Gained carrier Dec 13 02:01:21.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:01:21.921796 systemd-networkd[1025]: eth0: DHCPv4 address 10.128.0.4/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:01:21.941000 audit[1012]: AVC avc: denied { confidentiality } for pid=1012 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:01:21.941000 audit[1012]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e390919b00 a1=337fc a2=7f498bbe3bc5 a3=5 items=110 ppid=1011 pid=1012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:01:21.941000 audit: CWD cwd="/" Dec 13 02:01:21.941000 audit: PATH item=0 name=(null) inode=1033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=1 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=2 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=3 name=(null) inode=13767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=4 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=5 name=(null) inode=13768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=6 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=7 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=8 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=9 name=(null) inode=13770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=10 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=11 name=(null) inode=13771 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:01:21.941000 audit: PATH item=12 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=13 name=(null) inode=13772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=14 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=15 name=(null) inode=13773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=16 name=(null) inode=13769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=17 name=(null) inode=13774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=18 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=19 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=20 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=21 name=(null) inode=13776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=22 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=23 name=(null) inode=13777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=24 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=25 name=(null) inode=13778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=26 name=(null) inode=13775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=27 name=(null) inode=13779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=28 name=(null) inode=13775 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=29 name=(null) inode=13780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=30 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=31 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=32 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=33 name=(null) inode=13782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=34 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=35 name=(null) inode=13783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=36 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=37 name=(null) inode=13784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=38 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=39 name=(null) inode=13785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=40 name=(null) inode=13781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=41 name=(null) inode=13786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=42 name=(null) inode=13766 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=43 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=44 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=45 name=(null) inode=13788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=46 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=47 name=(null) inode=13789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=48 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=49 name=(null) inode=13790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=50 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=51 name=(null) inode=13791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=52 name=(null) inode=13787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=53 name=(null) inode=13792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=54 name=(null) inode=1033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=55 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=56 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=57 name=(null) inode=13794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=58 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=59 name=(null) inode=13795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=60 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:01:21.941000 audit: PATH item=61 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=62 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=63 name=(null) inode=13797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=64 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=65 name=(null) inode=13798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=66 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=67 name=(null) inode=13799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=68 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=69 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=70 name=(null) inode=13796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=71 name=(null) inode=13801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=72 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=73 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=74 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=75 name=(null) inode=13803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=76 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=77 name=(null) inode=13804 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=78 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=79 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=80 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=81 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=82 name=(null) inode=13802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=83 name=(null) inode=13807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=84 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=85 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=86 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=87 name=(null) inode=13809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=88 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=89 name=(null) inode=13810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=90 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=91 name=(null) inode=13811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=92 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=93 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=94 name=(null) inode=13808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=95 name=(null) inode=13813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=96 name=(null) inode=13793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=97 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=98 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=99 name=(null) inode=13815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=100 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=101 name=(null) inode=13816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=102 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=103 name=(null) inode=13817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=104 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=105 name=(null) inode=13818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=106 name=(null) inode=13814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=107 name=(null) inode=13819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:01:21.941000 audit: PATH item=109 name=(null) inode=13828 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Dec 13 02:01:21.941000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:01:21.974627 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:01:21.983630 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:01:22.003796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:01:22.010636 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:01:22.041629 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 02:01:22.058643 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:01:22.066659 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:01:22.077631 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:01:22.099252 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:01:22.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.109421 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:01:22.137196 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:01:22.169008 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:01:22.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.177953 systemd[1]: Reached target cryptsetup.target. Dec 13 02:01:22.188214 systemd[1]: Starting lvm2-activation.service... Dec 13 02:01:22.194428 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:01:22.222042 systemd[1]: Finished lvm2-activation.service. Dec 13 02:01:22.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.230957 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:01:22.239808 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:01:22.239878 systemd[1]: Reached target local-fs.target. Dec 13 02:01:22.248761 systemd[1]: Reached target machines.target. Dec 13 02:01:22.259383 systemd[1]: Starting ldconfig.service... Dec 13 02:01:22.267713 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:01:22.267811 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:01:22.269695 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:01:22.278386 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:01:22.290278 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:01:22.300744 systemd[1]: Starting systemd-sysext.service... Dec 13 02:01:22.301695 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1051 (bootctl) Dec 13 02:01:22.304902 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:01:22.324747 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Dec 13 02:01:22.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.328377 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:01:22.337848 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:01:22.338128 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:01:22.359626 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:01:22.462693 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31) Dec 13 02:01:22.462693 systemd-fsck[1061]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:01:22.464356 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:01:22.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.476541 systemd[1]: Mounting boot.mount... Dec 13 02:01:22.510944 systemd[1]: Mounted boot.mount. Dec 13 02:01:22.535284 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:01:22.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.678018 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:01:22.678978 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:01:22.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.702643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:01:22.730643 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:01:22.756708 (sd-sysext)[1067]: Using extensions 'kubernetes'. Dec 13 02:01:22.757408 (sd-sysext)[1067]: Merged extensions into '/usr'. Dec 13 02:01:22.781478 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:22.784128 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:01:22.794029 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:01:22.796169 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:01:22.804665 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:01:22.813650 systemd[1]: Starting modprobe@loop.service... Dec 13 02:01:22.821800 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:01:22.822048 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:01:22.822278 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:22.827376 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:01:22.835575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:01:22.835841 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 02:01:22.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.845527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:01:22.845842 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:01:22.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.856466 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:01:22.856712 systemd[1]: Finished modprobe@loop.service. Dec 13 02:01:22.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.865520 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:01:22.865743 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:01:22.868456 systemd[1]: Finished systemd-sysext.service. Dec 13 02:01:22.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:22.878812 systemd[1]: Starting ensure-sysext.service... Dec 13 02:01:22.888490 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:01:22.900864 systemd[1]: Reloading. Dec 13 02:01:22.931672 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:01:22.941021 ldconfig[1050]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:01:22.946638 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:01:22.963650 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 02:01:23.026329 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2024-12-13T02:01:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:01:23.026378 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2024-12-13T02:01:23Z" level=info msg="torcx already run" Dec 13 02:01:23.169769 systemd-networkd[1025]: eth0: Gained IPv6LL Dec 13 02:01:23.175114 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:01:23.175147 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:01:23.215340 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:01:23.298000 audit: BPF prog-id=30 op=LOAD Dec 13 02:01:23.298000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:01:23.299000 audit: BPF prog-id=31 op=LOAD Dec 13 02:01:23.299000 audit: BPF prog-id=32 op=LOAD Dec 13 02:01:23.299000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:01:23.299000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:01:23.299000 audit: BPF prog-id=33 op=LOAD Dec 13 02:01:23.299000 audit: BPF prog-id=34 op=LOAD Dec 13 02:01:23.299000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:01:23.299000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:01:23.301000 audit: BPF prog-id=35 op=LOAD Dec 13 02:01:23.301000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:01:23.305000 audit: BPF prog-id=36 op=LOAD Dec 13 02:01:23.305000 audit: BPF prog-id=27 op=UNLOAD Dec 13 02:01:23.305000 audit: BPF prog-id=37 op=LOAD Dec 13 02:01:23.305000 audit: BPF prog-id=38 op=LOAD Dec 13 02:01:23.305000 audit: BPF prog-id=28 op=UNLOAD Dec 13 02:01:23.305000 audit: BPF prog-id=29 op=UNLOAD Dec 13 02:01:23.310861 systemd[1]: Finished ldconfig.service. Dec 13 02:01:23.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:23.319670 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:01:23.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:23.333903 systemd[1]: Starting audit-rules.service... Dec 13 02:01:23.342739 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:01:23.351306 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:01:23.363250 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:01:23.371000 audit: BPF prog-id=39 op=LOAD Dec 13 02:01:23.374534 systemd[1]: Starting systemd-resolved.service... Dec 13 02:01:23.381000 audit: BPF prog-id=40 op=LOAD Dec 13 02:01:23.384845 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:01:23.394012 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:01:23.406111 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 02:01:23.411000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:01:23.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:01:23.415433 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:01:23.415702 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:01:23.418000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:01:23.418000 audit[1168]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcaaec6f20 a2=420 a3=0 items=0 ppid=1138 pid=1168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:01:23.418000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:01:23.420339 augenrules[1168]: No rules Dec 13 02:01:23.425376 systemd[1]: Finished audit-rules.service. Dec 13 02:01:23.433283 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:01:23.449984 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:01:23.462003 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:23.462513 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.465019 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:01:23.473878 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:01:23.482788 systemd[1]: Starting modprobe@loop.service... Dec 13 02:01:23.491833 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:01:23.500789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.501049 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:01:23.503547 systemd[1]: Starting systemd-update-done.service... Dec 13 02:01:23.505215 enable-oslogin[1176]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:01:23.510713 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:01:23.510938 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:23.513334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:01:23.513555 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:01:23.522428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:01:23.522650 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:01:23.532405 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:01:23.532617 systemd[1]: Finished modprobe@loop.service. Dec 13 02:01:23.541455 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. 
Dec 13 02:01:23.541711 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:01:23.550398 systemd[1]: Finished systemd-update-done.service. Dec 13 02:01:23.559450 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:01:23.559672 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.564692 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:23.565159 systemd-resolved[1155]: Positive Trust Anchors: Dec 13 02:01:23.565492 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.565665 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:01:23.565853 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:01:23.568748 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:01:23.578621 systemd[1]: Starting modprobe@drm.service... Dec 13 02:01:23.580450 systemd-timesyncd[1160]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 02:01:23.580954 systemd-timesyncd[1160]: Initial clock synchronization to Fri 2024-12-13 02:01:23.870305 UTC. Dec 13 02:01:23.587363 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:01:23.596616 systemd[1]: Starting modprobe@loop.service... Dec 13 02:01:23.605576 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:01:23.606746 systemd-resolved[1155]: Defaulting to hostname 'linux'. Dec 13 02:01:23.611375 enable-oslogin[1183]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:01:23.613841 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.614060 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:01:23.615998 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:01:23.624778 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:01:23.625022 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:01:23.626385 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:01:23.636799 systemd[1]: Started systemd-resolved.service. Dec 13 02:01:23.646391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:01:23.646669 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:01:23.655373 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:01:23.655616 systemd[1]: Finished modprobe@drm.service. Dec 13 02:01:23.665366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:01:23.665589 systemd[1]: Finished modprobe@efi_pstore.service. 
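The single positive trust anchor above is the IANA root KSK-2017 DS record (key tag 20326); the negative anchors are private-use zones that can never validate. systemd-resolved also accepts local additions via dnssec-trust-anchors.d(5); a hedged example of extending the negative list:

    # hypothetical /etc/dnssec-trust-anchors.d/site.negative
    # one domain per line; resolved will not demand DNSSEC proofs for these
    corp.example
    0.168.192.in-addr.arpa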
Dec 13 02:01:23.674396 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:01:23.674650 systemd[1]: Finished modprobe@loop.service. Dec 13 02:01:23.683456 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:01:23.683770 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:01:23.693489 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:01:23.704907 systemd[1]: Reached target network.target. Dec 13 02:01:23.713797 systemd[1]: Reached target network-online.target. Dec 13 02:01:23.722837 systemd[1]: Reached target nss-lookup.target. Dec 13 02:01:23.731832 systemd[1]: Reached target time-set.target. Dec 13 02:01:23.740843 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:01:23.740914 systemd[1]: Reached target sysinit.target. Dec 13 02:01:23.749920 systemd[1]: Started motdgen.path. Dec 13 02:01:23.756868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:01:23.767059 systemd[1]: Started logrotate.timer. Dec 13 02:01:23.773921 systemd[1]: Started mdadm.timer. Dec 13 02:01:23.780795 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:01:23.789810 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:01:23.789879 systemd[1]: Reached target paths.target. Dec 13 02:01:23.796793 systemd[1]: Reached target timers.target. Dec 13 02:01:23.804239 systemd[1]: Listening on dbus.socket. Dec 13 02:01:23.813251 systemd[1]: Starting docker.socket... Dec 13 02:01:23.825702 systemd[1]: Listening on sshd.socket. Dec 13 02:01:23.832952 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:01:23.833061 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.834078 systemd[1]: Finished ensure-sysext.service. Dec 13 02:01:23.843008 systemd[1]: Listening on docker.socket. Dec 13 02:01:23.850931 systemd[1]: Reached target sockets.target. Dec 13 02:01:23.859753 systemd[1]: Reached target basic.target. Dec 13 02:01:23.866790 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.866837 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:01:23.868472 systemd[1]: Starting containerd.service... Dec 13 02:01:23.877112 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:01:23.889019 systemd[1]: Starting dbus.service... Dec 13 02:01:23.899556 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:01:23.909128 systemd[1]: Starting extend-filesystems.service... Dec 13 02:01:23.917080 jq[1190]: false Dec 13 02:01:23.915760 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:01:23.918196 systemd[1]: Starting kubelet.service... Dec 13 02:01:23.926350 systemd[1]: Starting motdgen.service... Dec 13 02:01:23.933074 systemd[1]: Starting oem-gce.service... Dec 13 02:01:23.939851 systemd[1]: Starting prepare-helm.service... Dec 13 02:01:23.949459 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:01:23.958583 systemd[1]: Starting sshd-keygen.service... 
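docker.socket here is socket-activated: systemd opens /run/docker.sock itself and only starts the daemon when the first client connects. The general shape of such a unit, as a sketch rather than Flatcar's shipped file:

    [Unit]
    Description=Docker API socket (sketch)

    [Socket]
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketGroup=docker

    [Install]
    WantedBy=sockets.target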
Dec 13 02:01:23.969766 systemd[1]: Starting systemd-logind.service. Dec 13 02:01:23.976770 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:01:23.976891 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 02:01:23.977706 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:01:23.979001 systemd[1]: Starting update-engine.service... Dec 13 02:01:23.987962 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:01:23.996147 jq[1213]: true Dec 13 02:01:24.000249 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:01:24.000643 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:01:24.006861 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:01:24.007196 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:01:24.039268 mkfs.ext4[1220]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 02:01:24.047072 mkfs.ext4[1220]: Discarding device blocks: done Dec 13 02:01:24.047232 mkfs.ext4[1220]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 02:01:24.047232 mkfs.ext4[1220]: Filesystem UUID: cde76ec2-c724-49cd-a9f7-3cc608ef7681 Dec 13 02:01:24.047232 mkfs.ext4[1220]: Superblock backups stored on blocks: Dec 13 02:01:24.047232 mkfs.ext4[1220]: 32768, 98304, 163840, 229376 Dec 13 02:01:24.047232 mkfs.ext4[1220]: Allocating group tables: done Dec 13 02:01:24.047473 mkfs.ext4[1220]: Writing inode tables: done Dec 13 02:01:24.048283 mkfs.ext4[1220]: Creating journal (8192 blocks): done Dec 13 02:01:24.053382 extend-filesystems[1191]: Found loop1 Dec 13 02:01:24.058545 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:01:24.065450 mkfs.ext4[1220]: Writing superblocks and filesystem accounting information: done Dec 13 02:01:24.058922 systemd[1]: Finished motdgen.service. Dec 13 02:01:24.066185 extend-filesystems[1191]: Found sda Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda1 Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda2 Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda3 Dec 13 02:01:24.081353 extend-filesystems[1191]: Found usr Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda4 Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda6 Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda7 Dec 13 02:01:24.081353 extend-filesystems[1191]: Found sda9 Dec 13 02:01:24.081353 extend-filesystems[1191]: Checking size of /dev/sda9 Dec 13 02:01:24.206294 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:01:24.206361 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 02:01:24.206446 jq[1218]: true Dec 13 02:01:24.206612 tar[1217]: linux-amd64/helm Dec 13 02:01:24.126884 dbus-daemon[1189]: [system] SELinux support is enabled Dec 13 02:01:24.127208 systemd[1]: Started dbus.service. 
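mke2fs is formatting a 1 GiB image here: the kernel's "loop2: detected capacity change from 0 to 2097152" is in 512-byte sectors (2097152 x 512 B = 1 GiB), matching the 262144 4k-block filesystem being created. Presumably the target is the /var/lib/flatcar-oem-gce.img file that the later oem-gce entries mount; the equivalent by hand would be roughly:

    # sketch; the image path is taken from the later nspawn/umount entries
    truncate -s 1G /var/lib/flatcar-oem-gce.img   # 2097152 * 512 B
    mkfs.ext4 /var/lib/flatcar-oem-gce.img        # -> 262144 * 4 KiB blocks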
Dec 13 02:01:24.207435 extend-filesystems[1191]: Resized partition /dev/sda9 Dec 13 02:01:24.161730 dbus-daemon[1189]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1025 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:01:24.147942 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:01:24.222959 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:01:24.183963 dbus-daemon[1189]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:01:24.148005 systemd[1]: Reached target system-config.target. Dec 13 02:01:24.232461 umount[1228]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 02:01:24.163146 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:01:24.163189 systemd[1]: Reached target user-config.target. Dec 13 02:01:24.190687 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:01:24.250488 update_engine[1211]: I1213 02:01:24.250399 1211 main.cc:92] Flatcar Update Engine starting Dec 13 02:01:24.261957 systemd[1]: Started update-engine.service. Dec 13 02:01:24.262659 update_engine[1211]: I1213 02:01:24.262519 1211 update_check_scheduler.cc:74] Next update check in 9m51s Dec 13 02:01:24.274666 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 02:01:24.279754 systemd[1]: Started locksmithd.service. Dec 13 02:01:24.316625 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:01:24.318291 extend-filesystems[1238]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:01:24.318291 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 02:01:24.318291 extend-filesystems[1238]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 02:01:24.388042 extend-filesystems[1191]: Resized filesystem in /dev/sda9 Dec 13 02:01:24.319809 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:01:24.397064 bash[1253]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:01:24.320088 systemd[1]: Finished extend-filesystems.service. Dec 13 02:01:24.344387 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:01:24.403126 env[1219]: time="2024-12-13T02:01:24.403055178Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:01:24.464825 dbus-daemon[1189]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:01:24.465069 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:01:24.465851 dbus-daemon[1189]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1246 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:01:24.483106 systemd[1]: Starting polkit.service... 
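extend-filesystems grew the root filesystem in place, while mounted: 1617920 -> 2538491 4-KiB blocks, i.e. roughly 6.2 GiB -> 9.7 GiB (2538491 x 4096 B is about 9.68 GiB). ext4 supports this online, so the service reduces to a single call:

    # what the resize2fs 1.46.5 run above amounts to
    resize2fs /dev/sda9   # grow to fill the enlarged partition; no unmount needed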
Dec 13 02:01:24.487426 systemd-logind[1208]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:01:24.487478 systemd-logind[1208]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:01:24.487514 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:01:24.494939 systemd-logind[1208]: New seat seat0. Dec 13 02:01:24.503132 systemd[1]: Started systemd-logind.service. Dec 13 02:01:24.636757 coreos-metadata[1188]: Dec 13 02:01:24.636 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 02:01:24.644816 polkitd[1262]: Started polkitd version 121 Dec 13 02:01:24.660251 coreos-metadata[1188]: Dec 13 02:01:24.660 INFO Fetch failed with 404: resource not found Dec 13 02:01:24.660432 coreos-metadata[1188]: Dec 13 02:01:24.660 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 02:01:24.661843 coreos-metadata[1188]: Dec 13 02:01:24.661 INFO Fetch successful Dec 13 02:01:24.661843 coreos-metadata[1188]: Dec 13 02:01:24.661 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 02:01:24.662913 coreos-metadata[1188]: Dec 13 02:01:24.662 INFO Fetch failed with 404: resource not found Dec 13 02:01:24.663049 coreos-metadata[1188]: Dec 13 02:01:24.662 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 02:01:24.664965 coreos-metadata[1188]: Dec 13 02:01:24.664 INFO Fetch failed with 404: resource not found Dec 13 02:01:24.665074 coreos-metadata[1188]: Dec 13 02:01:24.664 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 02:01:24.666584 coreos-metadata[1188]: Dec 13 02:01:24.666 INFO Fetch successful Dec 13 02:01:24.670281 unknown[1188]: wrote ssh authorized keys file for user: core Dec 13 02:01:24.672887 env[1219]: time="2024-12-13T02:01:24.672838731Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:01:24.676885 env[1219]: time="2024-12-13T02:01:24.676842643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:01:24.680347 env[1219]: time="2024-12-13T02:01:24.680296476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:01:24.681356 env[1219]: time="2024-12-13T02:01:24.681323858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:01:24.681907 env[1219]: time="2024-12-13T02:01:24.681869706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:01:24.683563 env[1219]: time="2024-12-13T02:01:24.683530689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:01:24.683732 env[1219]: time="2024-12-13T02:01:24.683704139Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:01:24.683855 env[1219]: time="2024-12-13T02:01:24.683830398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:01:24.684096 env[1219]: time="2024-12-13T02:01:24.684068112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:01:24.684967 env[1219]: time="2024-12-13T02:01:24.684936069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:01:24.685359 env[1219]: time="2024-12-13T02:01:24.685321482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:01:24.685930 env[1219]: time="2024-12-13T02:01:24.685903171Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:01:24.688148 env[1219]: time="2024-12-13T02:01:24.688105236Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:01:24.688322 env[1219]: time="2024-12-13T02:01:24.688293777Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:01:24.693185 polkitd[1262]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:01:24.693686 polkitd[1262]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:01:24.696463 polkitd[1262]: Finished loading, compiling and executing 2 rules Dec 13 02:01:24.697331 dbus-daemon[1189]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:01:24.697587 systemd[1]: Started polkit.service. Dec 13 02:01:24.698118 env[1219]: time="2024-12-13T02:01:24.697944804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:01:24.698118 env[1219]: time="2024-12-13T02:01:24.698017205Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:01:24.698118 env[1219]: time="2024-12-13T02:01:24.698041398Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:01:24.698118 env[1219]: time="2024-12-13T02:01:24.698092604Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698118 env[1219]: time="2024-12-13T02:01:24.698116716Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698138775Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698160654Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698187013Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698209785Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698235294Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698259036Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.698366 env[1219]: time="2024-12-13T02:01:24.698281856Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:01:24.698705 env[1219]: time="2024-12-13T02:01:24.698451962Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:01:24.698705 env[1219]: time="2024-12-13T02:01:24.698590260Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:01:24.699056 env[1219]: time="2024-12-13T02:01:24.699021800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:01:24.699168 env[1219]: time="2024-12-13T02:01:24.699078822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699168 env[1219]: time="2024-12-13T02:01:24.699104150Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699191301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699217991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699239754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699260866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699281503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699304776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699324931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699414 env[1219]: time="2024-12-13T02:01:24.699345125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699833 env[1219]: time="2024-12-13T02:01:24.699369251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:01:24.699833 env[1219]: time="2024-12-13T02:01:24.699784083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699942 env[1219]: time="2024-12-13T02:01:24.699850402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 02:01:24.699942 env[1219]: time="2024-12-13T02:01:24.699879168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.699942 env[1219]: time="2024-12-13T02:01:24.699902901Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:01:24.700076 env[1219]: time="2024-12-13T02:01:24.699952137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:01:24.700076 env[1219]: time="2024-12-13T02:01:24.699975683Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:01:24.700076 env[1219]: time="2024-12-13T02:01:24.700024997Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:01:24.700232 env[1219]: time="2024-12-13T02:01:24.700079941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:01:24.700587 polkitd[1262]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:01:24.715967 env[1219]: time="2024-12-13T02:01:24.715824678Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:01:24.719833 env[1219]: time="2024-12-13T02:01:24.715993606Z" level=info msg="Connect containerd service" Dec 13 02:01:24.719833 env[1219]: time="2024-12-13T02:01:24.716084425Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:01:24.731881 update-ssh-keys[1265]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:01:24.732702 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:01:24.752212 systemd-hostnamed[1246]: Hostname set to (transient) Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755416853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755547517Z" level=info msg="Start subscribing containerd event" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755629071Z" level=info msg="Start recovering state" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755738987Z" level=info msg="Start event monitor" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755757937Z" level=info msg="Start snapshots syncer" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755772656Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.755785143Z" level=info msg="Start streaming server" Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.756529163Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:01:24.759619 env[1219]: time="2024-12-13T02:01:24.756700530Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:01:24.759347 systemd[1]: Started containerd.service. Dec 13 02:01:24.761297 systemd-resolved[1155]: System hostname changed to 'ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal'. Dec 13 02:01:24.796539 env[1219]: time="2024-12-13T02:01:24.796113013Z" level=info msg="containerd successfully booted in 0.394349s" Dec 13 02:01:25.872816 tar[1217]: linux-amd64/LICENSE Dec 13 02:01:25.873442 tar[1217]: linux-amd64/README.md Dec 13 02:01:25.893556 systemd[1]: Finished prepare-helm.service. Dec 13 02:01:26.273521 systemd[1]: Started kubelet.service. Dec 13 02:01:27.539562 locksmithd[1254]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:01:27.882062 kubelet[1277]: E1213 02:01:27.881908 1277 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:01:27.885441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:01:27.885707 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:01:27.886068 systemd[1]: kubelet.service: Consumed 1.527s CPU time. Dec 13 02:01:31.310017 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 02:01:31.715774 sshd_keygen[1222]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:01:31.760110 systemd[1]: Finished sshd-keygen.service. Dec 13 02:01:31.771506 systemd[1]: Starting issuegen.service... Dec 13 02:01:31.784371 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:01:31.784735 systemd[1]: Finished issuegen.service. Dec 13 02:01:31.796189 systemd[1]: Starting systemd-user-sessions.service... 
Dec 13 02:01:31.806070 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:01:31.817200 systemd[1]: Started getty@tty1.service. Dec 13 02:01:31.827429 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:01:31.836304 systemd[1]: Reached target getty.target. Dec 13 02:01:32.677126 systemd[1]: Created slice system-sshd.slice. Dec 13 02:01:32.687877 systemd[1]: Started sshd@0-10.128.0.4:22-139.178.68.195:45576.service. Dec 13 02:01:33.055767 sshd[1304]: Accepted publickey for core from 139.178.68.195 port 45576 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:01:33.060446 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:33.078841 systemd[1]: Created slice user-500.slice. Dec 13 02:01:33.088046 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:01:33.098700 systemd-logind[1208]: New session 1 of user core. Dec 13 02:01:33.105374 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:01:33.116210 systemd[1]: Starting user@500.service... Dec 13 02:01:33.135634 (systemd)[1307]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:33.309104 systemd[1307]: Queued start job for default target default.target. Dec 13 02:01:33.309917 systemd[1307]: Reached target paths.target. Dec 13 02:01:33.309954 systemd[1307]: Reached target sockets.target. Dec 13 02:01:33.309976 systemd[1307]: Reached target timers.target. Dec 13 02:01:33.309997 systemd[1307]: Reached target basic.target. Dec 13 02:01:33.310082 systemd[1307]: Reached target default.target. Dec 13 02:01:33.310139 systemd[1307]: Startup finished in 165ms. Dec 13 02:01:33.310841 systemd[1]: Started user@500.service. Dec 13 02:01:33.319422 systemd[1]: Started session-1.scope. Dec 13 02:01:33.555317 systemd[1]: Started sshd@1-10.128.0.4:22-139.178.68.195:45584.service. Dec 13 02:01:33.583661 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:01:33.611315 systemd-nspawn[1315]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 02:01:33.611315 systemd-nspawn[1315]: Press ^] three times within 1s to kill container. Dec 13 02:01:33.626641 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:01:33.647320 systemd[1]: tmp-unifiedh7IxEp.mount: Deactivated successfully. Dec 13 02:01:33.718172 systemd[1]: Started oem-gce.service. Dec 13 02:01:33.726175 systemd[1]: Reached target multi-user.target. Dec 13 02:01:33.737019 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:01:33.750768 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:01:33.751026 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:01:33.761019 systemd[1]: Startup finished in 1.025s (kernel) + 8.309s (initrd) + 17.649s (userspace) = 26.984s. Dec 13 02:01:33.788417 systemd-nspawn[1315]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 02:01:33.788417 systemd-nspawn[1315]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 02:01:33.788743 systemd-nspawn[1315]: + /usr/bin/google_instance_setup Dec 13 02:01:33.870421 sshd[1318]: Accepted publickey for core from 139.178.68.195 port 45584 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:01:33.872020 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:33.879712 systemd-logind[1208]: New session 2 of user core. Dec 13 02:01:33.880218 systemd[1]: Started session-2.scope. 
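The coreos-metadata fetch sequence earlier (sshKeys -> ssh-keys -> block-project-ssh-keys -> project-level keys) is the standard GCE fallback chain, where a 404 simply means "attribute unset, try the next location". Reproduced by hand it is roughly:

    # the metadata server only answers requests carrying this header
    curl -sf -H "Metadata-Flavor: Google" \
      http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys ||
    curl -sf -H "Metadata-Flavor: Google" \
      http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys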
Dec 13 02:01:34.094930 sshd[1318]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:34.101874 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:01:34.104748 systemd[1]: sshd@1-10.128.0.4:22-139.178.68.195:45584.service: Deactivated successfully. Dec 13 02:01:34.105802 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:01:34.107183 systemd-logind[1208]: Removed session 2. Dec 13 02:01:34.140999 systemd[1]: Started sshd@2-10.128.0.4:22-139.178.68.195:45592.service. Dec 13 02:01:34.447095 sshd[1329]: Accepted publickey for core from 139.178.68.195 port 45592 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:01:34.446917 instance-setup[1324]: INFO Running google_set_multiqueue. Dec 13 02:01:34.448445 sshd[1329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:34.455995 systemd-logind[1208]: New session 3 of user core. Dec 13 02:01:34.456193 systemd[1]: Started session-3.scope. Dec 13 02:01:34.474579 instance-setup[1324]: INFO Set channels for eth0 to 2. Dec 13 02:01:34.478268 instance-setup[1324]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 02:01:34.479981 instance-setup[1324]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 02:01:34.480508 instance-setup[1324]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 02:01:34.482111 instance-setup[1324]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 02:01:34.482629 instance-setup[1324]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 02:01:34.484198 instance-setup[1324]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 02:01:34.484723 instance-setup[1324]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 02:01:34.486265 instance-setup[1324]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 02:01:34.499579 instance-setup[1324]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 02:01:34.499786 instance-setup[1324]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 02:01:34.542905 systemd-nspawn[1315]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 02:01:34.658074 sshd[1329]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:34.665036 systemd[1]: sshd@2-10.128.0.4:22-139.178.68.195:45592.service: Deactivated successfully. Dec 13 02:01:34.666176 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:01:34.668575 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:01:34.670266 systemd-logind[1208]: Removed session 3. Dec 13 02:01:34.705813 systemd[1]: Started sshd@3-10.128.0.4:22-139.178.68.195:45606.service. Dec 13 02:01:34.900094 startup-script[1362]: INFO Starting startup scripts. Dec 13 02:01:34.916130 startup-script[1362]: INFO No startup scripts found in metadata. Dec 13 02:01:34.916312 startup-script[1362]: INFO Finished running startup scripts. Dec 13 02:01:34.950843 systemd-nspawn[1315]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 02:01:34.950843 systemd-nspawn[1315]: + daemon_pids=() Dec 13 02:01:34.950843 systemd-nspawn[1315]: + for d in accounts clock_skew network Dec 13 02:01:34.952009 systemd-nspawn[1315]: + daemon_pids+=($!) Dec 13 02:01:34.952009 systemd-nspawn[1315]: + for d in accounts clock_skew network Dec 13 02:01:34.952930 systemd-nspawn[1315]: + daemon_pids+=($!) 
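google_set_multiqueue pins each virtio-net queue's IRQ pair to one vCPU and mirrors the choice in XPS; the XPS values above are CPU bitmasks (1 = cpu0, 2 = cpu1). Its effect is equivalent to the following writes (IRQ numbers are specific to this VM):

    echo 0 > /proc/irq/31/smp_affinity_list             # queue 0 IRQs -> cpu0
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list             # queue 1 IRQs -> cpu1
    echo 1 > /proc/irq/34/smp_affinity_list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # bitmask 0b01 = cpu0
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # bitmask 0b10 = cpu1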
Dec 13 02:01:34.952930 systemd-nspawn[1315]: + for d in accounts clock_skew network Dec 13 02:01:34.953211 systemd-nspawn[1315]: + daemon_pids+=($!) Dec 13 02:01:34.953376 systemd-nspawn[1315]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 02:01:34.953543 systemd-nspawn[1315]: + /usr/bin/systemd-notify --ready Dec 13 02:01:34.954425 systemd-nspawn[1315]: + /usr/bin/google_clock_skew_daemon Dec 13 02:01:34.954641 systemd-nspawn[1315]: + /usr/bin/google_network_daemon Dec 13 02:01:34.955523 systemd-nspawn[1315]: + /usr/bin/google_accounts_daemon Dec 13 02:01:35.011025 systemd-nspawn[1315]: + wait -n 36 37 38 Dec 13 02:01:35.016488 sshd[1366]: Accepted publickey for core from 139.178.68.195 port 45606 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:01:35.017590 sshd[1366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:35.028646 systemd[1]: Started session-4.scope. Dec 13 02:01:35.031187 systemd-logind[1208]: New session 4 of user core. Dec 13 02:01:35.240545 sshd[1366]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:35.244948 systemd[1]: sshd@3-10.128.0.4:22-139.178.68.195:45606.service: Deactivated successfully. Dec 13 02:01:35.246090 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:01:35.248526 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:01:35.250357 systemd-logind[1208]: Removed session 4. Dec 13 02:01:35.286160 systemd[1]: Started sshd@4-10.128.0.4:22-139.178.68.195:45620.service. Dec 13 02:01:35.600481 sshd[1378]: Accepted publickey for core from 139.178.68.195 port 45620 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:01:35.602076 sshd[1378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:35.610586 systemd[1]: Started session-5.scope. Dec 13 02:01:35.613194 systemd-logind[1208]: New session 5 of user core. Dec 13 02:01:35.687956 google-networking[1372]: INFO Starting Google Networking daemon. Dec 13 02:01:35.729287 google-clock-skew[1371]: INFO Starting Google Clock Skew daemon. Dec 13 02:01:35.742566 google-clock-skew[1371]: INFO Clock drift token has changed: 0. Dec 13 02:01:35.747400 systemd-nspawn[1315]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 02:01:35.747731 systemd-nspawn[1315]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 02:01:35.748680 google-clock-skew[1371]: WARNING Failed to sync system time with hardware clock. Dec 13 02:01:35.805310 sudo[1389]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:01:35.806315 sudo[1389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:01:35.821508 groupadd[1390]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 02:01:35.825735 groupadd[1390]: group added to /etc/gshadow: name=google-sudoers Dec 13 02:01:35.830634 groupadd[1390]: new group: name=google-sudoers, GID=1000 Dec 13 02:01:35.849132 google-accounts[1370]: INFO Starting Google Accounts daemon. Dec 13 02:01:35.850363 systemd[1]: Starting docker.service... Dec 13 02:01:35.884650 google-accounts[1370]: WARNING OS Login not installed. Dec 13 02:01:35.886119 google-accounts[1370]: INFO Creating a new user account for 0. Dec 13 02:01:35.892850 systemd-nspawn[1315]: useradd: invalid user name '0': use --badname to ignore Dec 13 02:01:35.893853 google-accounts[1370]: WARNING Could not create user 0. 
Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 02:01:35.916121 env[1406]: time="2024-12-13T02:01:35.916072942Z" level=info msg="Starting up" Dec 13 02:01:35.920398 env[1406]: time="2024-12-13T02:01:35.920353291Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:01:35.920398 env[1406]: time="2024-12-13T02:01:35.920389860Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:01:35.920584 env[1406]: time="2024-12-13T02:01:35.920424917Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:01:35.920584 env[1406]: time="2024-12-13T02:01:35.920446109Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:01:35.924932 env[1406]: time="2024-12-13T02:01:35.924890622Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:01:35.924932 env[1406]: time="2024-12-13T02:01:35.924923995Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:01:35.925146 env[1406]: time="2024-12-13T02:01:35.924952103Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:01:35.925146 env[1406]: time="2024-12-13T02:01:35.924966855Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:01:35.934754 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3088990889-merged.mount: Deactivated successfully. Dec 13 02:01:35.967795 env[1406]: time="2024-12-13T02:01:35.967736370Z" level=info msg="Loading containers: start." Dec 13 02:01:36.142633 kernel: Initializing XFRM netlink socket Dec 13 02:01:36.189972 env[1406]: time="2024-12-13T02:01:36.189902903Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:01:36.277941 systemd-networkd[1025]: docker0: Link UP Dec 13 02:01:36.297454 env[1406]: time="2024-12-13T02:01:36.297390591Z" level=info msg="Loading containers: done." Dec 13 02:01:36.316455 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2602711377-merged.mount: Deactivated successfully. Dec 13 02:01:36.320747 env[1406]: time="2024-12-13T02:01:36.320690964Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:01:36.321032 env[1406]: time="2024-12-13T02:01:36.320988381Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:01:36.321193 env[1406]: time="2024-12-13T02:01:36.321147945Z" level=info msg="Daemon has completed initialization" Dec 13 02:01:36.344038 systemd[1]: Started docker.service. Dec 13 02:01:36.352533 env[1406]: time="2024-12-13T02:01:36.352469638Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:01:37.589097 env[1219]: time="2024-12-13T02:01:37.588706350Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:01:38.043144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:01:38.043401 systemd[1]: Stopped kubelet.service. Dec 13 02:01:38.043474 systemd[1]: kubelet.service: Consumed 1.527s CPU time. Dec 13 02:01:38.048018 systemd[1]: Starting kubelet.service... 
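The nspawn trace earlier shows the oem-gce payload backgrounding three daemons, signalling readiness with systemd-notify --ready over NOTIFY_SOCKET, and then using wait -n so any daemon's exit takes the service down; that pattern implies a Type=notify unit with NotifyAccess= opened up, since the readiness message comes from a helper rather than the main PID. Separately, docker's hint about --bip in the "Default bridge (docker0)" line maps to the bip key in the daemon config; a hypothetical example (the address is illustrative, not read from this host):

    # hypothetical /etc/docker/daemon.json
    { "bip": "172.17.42.1/16" }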
Dec 13 02:01:38.056653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937137538.mount: Deactivated successfully. Dec 13 02:01:38.314678 systemd[1]: Started kubelet.service. Dec 13 02:01:38.414329 kubelet[1541]: E1213 02:01:38.414263 1541 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:01:38.420914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:01:38.421152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:01:40.307303 env[1219]: time="2024-12-13T02:01:40.307230167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:40.310579 env[1219]: time="2024-12-13T02:01:40.310508918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:40.313475 env[1219]: time="2024-12-13T02:01:40.313427179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:40.316022 env[1219]: time="2024-12-13T02:01:40.315976207Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:40.317281 env[1219]: time="2024-12-13T02:01:40.317226338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:01:40.331980 env[1219]: time="2024-12-13T02:01:40.331926133Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:01:42.228614 env[1219]: time="2024-12-13T02:01:42.228543163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:42.232209 env[1219]: time="2024-12-13T02:01:42.232137798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:42.235591 env[1219]: time="2024-12-13T02:01:42.235528953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:42.239199 env[1219]: time="2024-12-13T02:01:42.239133066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:42.240709 env[1219]: time="2024-12-13T02:01:42.240656541Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference 
\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:01:42.255443 env[1219]: time="2024-12-13T02:01:42.255374881Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:01:43.675084 env[1219]: time="2024-12-13T02:01:43.675014484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:43.678313 env[1219]: time="2024-12-13T02:01:43.678245482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:43.681017 env[1219]: time="2024-12-13T02:01:43.680972770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:43.684224 env[1219]: time="2024-12-13T02:01:43.684163222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:43.685406 env[1219]: time="2024-12-13T02:01:43.685357481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:01:43.701645 env[1219]: time="2024-12-13T02:01:43.701585594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:01:44.817939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065249802.mount: Deactivated successfully. Dec 13 02:01:45.505369 env[1219]: time="2024-12-13T02:01:45.505299177Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.508227 env[1219]: time="2024-12-13T02:01:45.508173183Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.510236 env[1219]: time="2024-12-13T02:01:45.510193802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.512404 env[1219]: time="2024-12-13T02:01:45.512366257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.513035 env[1219]: time="2024-12-13T02:01:45.512988330Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:01:45.527325 env[1219]: time="2024-12-13T02:01:45.527273168Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:01:45.948031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342316276.mount: Deactivated successfully. 
Dec 13 02:01:47.129386 env[1219]: time="2024-12-13T02:01:47.129313495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.132612 env[1219]: time="2024-12-13T02:01:47.132541106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.136105 env[1219]: time="2024-12-13T02:01:47.136046596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.138968 env[1219]: time="2024-12-13T02:01:47.138912502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.140372 env[1219]: time="2024-12-13T02:01:47.140324963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:01:47.159245 env[1219]: time="2024-12-13T02:01:47.159193740Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:01:47.555583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261984336.mount: Deactivated successfully. Dec 13 02:01:47.561763 env[1219]: time="2024-12-13T02:01:47.561683776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.564268 env[1219]: time="2024-12-13T02:01:47.564222956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.566682 env[1219]: time="2024-12-13T02:01:47.566642908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.569361 env[1219]: time="2024-12-13T02:01:47.569320296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:47.570296 env[1219]: time="2024-12-13T02:01:47.570243582Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:01:47.584392 env[1219]: time="2024-12-13T02:01:47.584345842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:01:48.021391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834830761.mount: Deactivated successfully. Dec 13 02:01:48.672249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:01:48.672562 systemd[1]: Stopped kubelet.service. Dec 13 02:01:48.675365 systemd[1]: Starting kubelet.service... Dec 13 02:01:49.401497 systemd[1]: Started kubelet.service. 
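The "Scheduled restart job, restart counter is at N" entries mean kubelet.service carries Restart= logic, so systemd keeps relaunching it while its config file is missing. The kubeadm-style unit shape that produces this pattern is roughly (a sketch; the shipped unit may differ):

    [Service]
    Restart=always
    RestartSec=10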
Dec 13 02:01:49.478532 kubelet[1583]: E1213 02:01:49.478468 1583 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:01:49.481949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:01:49.482166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:01:51.394449 env[1219]: time="2024-12-13T02:01:51.394374455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:51.397724 env[1219]: time="2024-12-13T02:01:51.397673249Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:51.400556 env[1219]: time="2024-12-13T02:01:51.400508666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:51.402939 env[1219]: time="2024-12-13T02:01:51.402898473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:51.403976 env[1219]: time="2024-12-13T02:01:51.403925532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:01:54.784297 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:01:55.239168 systemd[1]: Stopped kubelet.service. Dec 13 02:01:55.242265 systemd[1]: Starting kubelet.service... Dec 13 02:01:55.275768 systemd[1]: Reloading. Dec 13 02:01:55.418765 /usr/lib/systemd/system-generators/torcx-generator[1680]: time="2024-12-13T02:01:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:01:55.418811 /usr/lib/systemd/system-generators/torcx-generator[1680]: time="2024-12-13T02:01:55Z" level=info msg="torcx already run" Dec 13 02:01:55.545883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:01:55.545911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:01:55.570048 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:01:55.716785 systemd[1]: Started kubelet.service. Dec 13 02:01:55.724832 systemd[1]: Stopping kubelet.service... Dec 13 02:01:55.726372 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:01:55.726654 systemd[1]: Stopped kubelet.service. Dec 13 02:01:55.729066 systemd[1]: Starting kubelet.service... 
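All three kubelet crashes so far are the same error: /var/lib/kubelet/config.yaml does not exist yet, because kubeadm only writes it during init/join. The smallest file that would get past the loader looks roughly like this (hypothetical content; cgroupDriver matches the SystemdCgroup=true runc options seen earlier):

    # /var/lib/kubelet/config.yaml (normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd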
Dec 13 02:01:55.936748 systemd[1]: Started kubelet.service. Dec 13 02:01:56.013337 kubelet[1736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:01:56.013809 kubelet[1736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:01:56.013879 kubelet[1736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:01:56.014058 kubelet[1736]: I1213 02:01:56.014019 1736 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:01:56.379935 kubelet[1736]: I1213 02:01:56.379445 1736 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:01:56.379935 kubelet[1736]: I1213 02:01:56.379482 1736 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:01:56.380172 kubelet[1736]: I1213 02:01:56.380019 1736 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:01:56.428211 kubelet[1736]: E1213 02:01:56.428161 1736 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.430255 kubelet[1736]: I1213 02:01:56.430199 1736 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:01:56.454803 kubelet[1736]: I1213 02:01:56.454757 1736 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:01:56.455155 kubelet[1736]: I1213 02:01:56.455128 1736 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:01:56.455449 kubelet[1736]: I1213 02:01:56.455422 1736 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:01:56.456885 kubelet[1736]: I1213 02:01:56.456834 1736 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:01:56.456885 kubelet[1736]: I1213 02:01:56.456874 1736 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:01:56.457050 kubelet[1736]: I1213 02:01:56.457041 1736 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:01:56.457208 kubelet[1736]: I1213 02:01:56.457190 1736 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:01:56.457294 kubelet[1736]: I1213 02:01:56.457218 1736 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:01:56.457294 kubelet[1736]: I1213 02:01:56.457255 1736 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:01:56.457294 kubelet[1736]: I1213 02:01:56.457279 1736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:01:56.459930 kubelet[1736]: W1213 02:01:56.459719 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.459930 kubelet[1736]: E1213 02:01:56.459809 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.460118 kubelet[1736]: W1213 02:01:56.459950 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: 
connection refused Dec 13 02:01:56.460118 kubelet[1736]: E1213 02:01:56.460028 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.460237 kubelet[1736]: I1213 02:01:56.460199 1736 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:01:56.477523 kubelet[1736]: I1213 02:01:56.477490 1736 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:01:56.479816 kubelet[1736]: W1213 02:01:56.479760 1736 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:01:56.480619 kubelet[1736]: I1213 02:01:56.480570 1736 server.go:1256] "Started kubelet" Dec 13 02:01:56.490960 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:01:56.492002 kubelet[1736]: I1213 02:01:56.491142 1736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:01:56.498722 kubelet[1736]: E1213 02:01:56.498688 1736 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.4:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal.18109a1f77b3292b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,UID:ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 02:01:56.480534827 +0000 UTC m=+0.533421006,LastTimestamp:2024-12-13 02:01:56.480534827 +0000 UTC m=+0.533421006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,}" Dec 13 02:01:56.500762 kubelet[1736]: I1213 02:01:56.500728 1736 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:01:56.502247 kubelet[1736]: I1213 02:01:56.502220 1736 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:01:56.503586 kubelet[1736]: E1213 02:01:56.503568 1736 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:01:56.503953 kubelet[1736]: I1213 02:01:56.503936 1736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:01:56.504315 kubelet[1736]: I1213 02:01:56.504294 1736 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:01:56.504463 kubelet[1736]: I1213 02:01:56.503978 1736 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:01:56.508665 kubelet[1736]: I1213 02:01:56.504001 1736 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:01:56.508901 kubelet[1736]: I1213 02:01:56.508881 1736 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:01:56.510538 kubelet[1736]: I1213 02:01:56.510512 1736 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:01:56.510848 kubelet[1736]: I1213 02:01:56.510821 1736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:01:56.511662 kubelet[1736]: E1213 02:01:56.511638 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="200ms" Dec 13 02:01:56.512978 kubelet[1736]: W1213 02:01:56.512920 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.513150 kubelet[1736]: E1213 02:01:56.513128 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.514248 kubelet[1736]: I1213 02:01:56.514207 1736 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:01:56.533337 kubelet[1736]: I1213 02:01:56.533304 1736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:01:56.539732 kubelet[1736]: I1213 02:01:56.539696 1736 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:01:56.540035 kubelet[1736]: I1213 02:01:56.540012 1736 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:01:56.540141 kubelet[1736]: I1213 02:01:56.540054 1736 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:01:56.540198 kubelet[1736]: E1213 02:01:56.540142 1736 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:01:56.542086 kubelet[1736]: W1213 02:01:56.542048 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.542229 kubelet[1736]: E1213 02:01:56.542107 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:56.544362 kubelet[1736]: I1213 02:01:56.544336 1736 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:01:56.544484 kubelet[1736]: I1213 02:01:56.544369 1736 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:01:56.544484 kubelet[1736]: I1213 02:01:56.544412 1736 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:01:56.547520 kubelet[1736]: I1213 02:01:56.547496 1736 policy_none.go:49] "None policy: Start" Dec 13 02:01:56.549356 kubelet[1736]: I1213 02:01:56.549332 1736 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:01:56.549476 kubelet[1736]: I1213 02:01:56.549369 1736 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:01:56.558959 systemd[1]: Created slice kubepods.slice. Dec 13 02:01:56.566455 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:01:56.570875 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 02:01:56.576860 kubelet[1736]: I1213 02:01:56.576821 1736 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:01:56.577169 kubelet[1736]: I1213 02:01:56.577145 1736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:01:56.580492 kubelet[1736]: E1213 02:01:56.580302 1736 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" not found" Dec 13 02:01:56.610554 kubelet[1736]: I1213 02:01:56.610503 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.611042 kubelet[1736]: E1213 02:01:56.610996 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.640896 kubelet[1736]: I1213 02:01:56.640300 1736 topology_manager.go:215] "Topology Admit Handler" podUID="783f839588b85d4c9c0eddea0bebfa13" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.653014 kubelet[1736]: I1213 02:01:56.652969 1736 topology_manager.go:215] "Topology Admit Handler" podUID="4ae37f738edd9528f836c7ceb679f820" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.659717 kubelet[1736]: I1213 02:01:56.659676 1736 topology_manager.go:215] "Topology Admit Handler" podUID="fa17cfd47b0a1c94d6ae682d188d6540" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.666917 systemd[1]: Created slice kubepods-burstable-pod783f839588b85d4c9c0eddea0bebfa13.slice. Dec 13 02:01:56.689711 systemd[1]: Created slice kubepods-burstable-pod4ae37f738edd9528f836c7ceb679f820.slice. Dec 13 02:01:56.696724 systemd[1]: Created slice kubepods-burstable-podfa17cfd47b0a1c94d6ae682d188d6540.slice. 
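Each "connection refused" error against https://10.128.0.4:6443 above is the bootstrap chicken-and-egg: the kubelet itself has to start the kube-apiserver static pod (the kubepods-burstable-pod… slices just created) before any of its own API calls can succeed, so its clients simply retry. A stdlib sketch of the equivalent wait, with the address taken from the log and the timeout chosen arbitrarily:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls a TCP endpoint until it accepts connections,
// mirroring what the kubelet's retry loops do implicitly while the
// static-pod apiserver is still being created.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("10.128.0.4:6443", time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("apiserver endpoint is accepting connections")
	}
}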
Dec 13 02:01:56.710236 kubelet[1736]: I1213 02:01:56.710196 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710560 kubelet[1736]: I1213 02:01:56.710530 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710718 kubelet[1736]: I1213 02:01:56.710609 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710718 kubelet[1736]: I1213 02:01:56.710663 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/783f839588b85d4c9c0eddea0bebfa13-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"783f839588b85d4c9c0eddea0bebfa13\") " pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710718 kubelet[1736]: I1213 02:01:56.710710 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/783f839588b85d4c9c0eddea0bebfa13-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"783f839588b85d4c9c0eddea0bebfa13\") " pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710894 kubelet[1736]: I1213 02:01:56.710800 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710894 kubelet[1736]: I1213 02:01:56.710849 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/783f839588b85d4c9c0eddea0bebfa13-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"783f839588b85d4c9c0eddea0bebfa13\") " pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.710894 kubelet[1736]: I1213 02:01:56.710889 1736 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.711070 kubelet[1736]: I1213 02:01:56.710926 1736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa17cfd47b0a1c94d6ae682d188d6540-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"fa17cfd47b0a1c94d6ae682d188d6540\") " pod="kube-system/kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.712754 kubelet[1736]: E1213 02:01:56.712709 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="400ms" Dec 13 02:01:56.820706 kubelet[1736]: I1213 02:01:56.820658 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.821147 kubelet[1736]: E1213 02:01:56.821119 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:56.986458 env[1219]: time="2024-12-13T02:01:56.986397920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,Uid:783f839588b85d4c9c0eddea0bebfa13,Namespace:kube-system,Attempt:0,}" Dec 13 02:01:56.995415 env[1219]: time="2024-12-13T02:01:56.995246966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,Uid:4ae37f738edd9528f836c7ceb679f820,Namespace:kube-system,Attempt:0,}" Dec 13 02:01:56.999842 env[1219]: time="2024-12-13T02:01:56.999788202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,Uid:fa17cfd47b0a1c94d6ae682d188d6540,Namespace:kube-system,Attempt:0,}" Dec 13 02:01:57.114003 kubelet[1736]: E1213 02:01:57.113961 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="800ms" Dec 13 02:01:57.227217 kubelet[1736]: I1213 02:01:57.227178 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:57.227701 kubelet[1736]: E1213 02:01:57.227659 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:57.408045 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount341557052.mount: Deactivated successfully. Dec 13 02:01:57.423777 env[1219]: time="2024-12-13T02:01:57.423712910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.425678 env[1219]: time="2024-12-13T02:01:57.425621703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.431116 env[1219]: time="2024-12-13T02:01:57.431048524Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.432457 env[1219]: time="2024-12-13T02:01:57.432396713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.434112 env[1219]: time="2024-12-13T02:01:57.434070791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.437148 env[1219]: time="2024-12-13T02:01:57.437080309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.439231 env[1219]: time="2024-12-13T02:01:57.439172013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.441334 env[1219]: time="2024-12-13T02:01:57.441259608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.443631 env[1219]: time="2024-12-13T02:01:57.443568403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.445103 env[1219]: time="2024-12-13T02:01:57.445056983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.449733 env[1219]: time="2024-12-13T02:01:57.449676552Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.459640 env[1219]: time="2024-12-13T02:01:57.459546306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:57.482822 env[1219]: time="2024-12-13T02:01:57.482702143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:57.482822 env[1219]: time="2024-12-13T02:01:57.482762708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:57.482822 env[1219]: time="2024-12-13T02:01:57.482780729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:57.483507 env[1219]: time="2024-12-13T02:01:57.483442411Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70271c406eca70ebb2b6d907f95619849d191d60af8e9b01879f35357913e3da pid=1775 runtime=io.containerd.runc.v2 Dec 13 02:01:57.520821 systemd[1]: Started cri-containerd-70271c406eca70ebb2b6d907f95619849d191d60af8e9b01879f35357913e3da.scope. Dec 13 02:01:57.526274 kubelet[1736]: W1213 02:01:57.525739 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:57.526274 kubelet[1736]: E1213 02:01:57.525829 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:57.550692 env[1219]: time="2024-12-13T02:01:57.547107613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:57.550692 env[1219]: time="2024-12-13T02:01:57.547224812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:57.550692 env[1219]: time="2024-12-13T02:01:57.547285930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:57.550692 env[1219]: time="2024-12-13T02:01:57.547545173Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcae408ecbc604c3781ec0bb899bf3f2812f5ca38a8aca50e62706f496096f7c pid=1803 runtime=io.containerd.runc.v2 Dec 13 02:01:57.571378 env[1219]: time="2024-12-13T02:01:57.571250704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:57.571378 env[1219]: time="2024-12-13T02:01:57.571317721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:57.571726 env[1219]: time="2024-12-13T02:01:57.571335513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:57.571726 env[1219]: time="2024-12-13T02:01:57.571535071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05411b7eaea8daad75f343ee5048dd33a71214a732a219322f81c86277bfa9cf pid=1827 runtime=io.containerd.runc.v2 Dec 13 02:01:57.585499 systemd[1]: Started cri-containerd-fcae408ecbc604c3781ec0bb899bf3f2812f5ca38a8aca50e62706f496096f7c.scope. Dec 13 02:01:57.611228 systemd[1]: Started cri-containerd-05411b7eaea8daad75f343ee5048dd33a71214a732a219322f81c86277bfa9cf.scope. Dec 13 02:01:57.640203 env[1219]: time="2024-12-13T02:01:57.640145485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,Uid:4ae37f738edd9528f836c7ceb679f820,Namespace:kube-system,Attempt:0,} returns sandbox id \"70271c406eca70ebb2b6d907f95619849d191d60af8e9b01879f35357913e3da\"" Dec 13 02:01:57.646620 kubelet[1736]: E1213 02:01:57.646190 1736 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flat" Dec 13 02:01:57.651967 env[1219]: time="2024-12-13T02:01:57.651917390Z" level=info msg="CreateContainer within sandbox \"70271c406eca70ebb2b6d907f95619849d191d60af8e9b01879f35357913e3da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:01:57.677842 env[1219]: time="2024-12-13T02:01:57.677668455Z" level=info msg="CreateContainer within sandbox \"70271c406eca70ebb2b6d907f95619849d191d60af8e9b01879f35357913e3da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1f356fd7a0750cc32bdaddd40670ad51c7517d9fe180f203eadd18b2da9a1fb\"" Dec 13 02:01:57.679063 env[1219]: time="2024-12-13T02:01:57.679022894Z" level=info msg="StartContainer for \"d1f356fd7a0750cc32bdaddd40670ad51c7517d9fe180f203eadd18b2da9a1fb\"" Dec 13 02:01:57.705973 kubelet[1736]: W1213 02:01:57.705900 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:57.705973 kubelet[1736]: E1213 02:01:57.705984 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:57.714319 env[1219]: time="2024-12-13T02:01:57.714248427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,Uid:783f839588b85d4c9c0eddea0bebfa13,Namespace:kube-system,Attempt:0,} returns sandbox id \"05411b7eaea8daad75f343ee5048dd33a71214a732a219322f81c86277bfa9cf\"" Dec 13 02:01:57.718930 kubelet[1736]: E1213 02:01:57.718888 1736 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-21291" Dec 13 02:01:57.722005 env[1219]: time="2024-12-13T02:01:57.721947314Z" level=info msg="CreateContainer within sandbox 
\"05411b7eaea8daad75f343ee5048dd33a71214a732a219322f81c86277bfa9cf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:01:57.735824 systemd[1]: Started cri-containerd-d1f356fd7a0750cc32bdaddd40670ad51c7517d9fe180f203eadd18b2da9a1fb.scope. Dec 13 02:01:57.755815 kubelet[1736]: W1213 02:01:57.755762 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:57.756012 kubelet[1736]: E1213 02:01:57.755837 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:57.767771 env[1219]: time="2024-12-13T02:01:57.767715540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal,Uid:fa17cfd47b0a1c94d6ae682d188d6540,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcae408ecbc604c3781ec0bb899bf3f2812f5ca38a8aca50e62706f496096f7c\"" Dec 13 02:01:57.774418 kubelet[1736]: E1213 02:01:57.774383 1736 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-21291" Dec 13 02:01:57.778467 env[1219]: time="2024-12-13T02:01:57.778402374Z" level=info msg="CreateContainer within sandbox \"fcae408ecbc604c3781ec0bb899bf3f2812f5ca38a8aca50e62706f496096f7c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:01:57.787627 env[1219]: time="2024-12-13T02:01:57.783376968Z" level=info msg="CreateContainer within sandbox \"05411b7eaea8daad75f343ee5048dd33a71214a732a219322f81c86277bfa9cf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92a67bfeb85a88b0919798125fcdfc3dd6fa3d538248b0e1e325c2ac91879318\"" Dec 13 02:01:57.787627 env[1219]: time="2024-12-13T02:01:57.784366293Z" level=info msg="StartContainer for \"92a67bfeb85a88b0919798125fcdfc3dd6fa3d538248b0e1e325c2ac91879318\"" Dec 13 02:01:57.811197 env[1219]: time="2024-12-13T02:01:57.811138331Z" level=info msg="CreateContainer within sandbox \"fcae408ecbc604c3781ec0bb899bf3f2812f5ca38a8aca50e62706f496096f7c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"93ae2b2539b506f6dfdf6f7ea6b50be3152f45e70c15cf645c28a24610e86410\"" Dec 13 02:01:57.812330 env[1219]: time="2024-12-13T02:01:57.812278449Z" level=info msg="StartContainer for \"93ae2b2539b506f6dfdf6f7ea6b50be3152f45e70c15cf645c28a24610e86410\"" Dec 13 02:01:57.819541 systemd[1]: Started cri-containerd-92a67bfeb85a88b0919798125fcdfc3dd6fa3d538248b0e1e325c2ac91879318.scope. Dec 13 02:01:57.865325 systemd[1]: Started cri-containerd-93ae2b2539b506f6dfdf6f7ea6b50be3152f45e70c15cf645c28a24610e86410.scope. 
Dec 13 02:01:57.872761 env[1219]: time="2024-12-13T02:01:57.872696122Z" level=info msg="StartContainer for \"d1f356fd7a0750cc32bdaddd40670ad51c7517d9fe180f203eadd18b2da9a1fb\" returns successfully" Dec 13 02:01:57.915162 kubelet[1736]: E1213 02:01:57.915114 1736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.4:6443: connect: connection refused" interval="1.6s" Dec 13 02:01:57.955311 env[1219]: time="2024-12-13T02:01:57.955245685Z" level=info msg="StartContainer for \"92a67bfeb85a88b0919798125fcdfc3dd6fa3d538248b0e1e325c2ac91879318\" returns successfully" Dec 13 02:01:58.006568 env[1219]: time="2024-12-13T02:01:58.006494310Z" level=info msg="StartContainer for \"93ae2b2539b506f6dfdf6f7ea6b50be3152f45e70c15cf645c28a24610e86410\" returns successfully" Dec 13 02:01:58.027609 kubelet[1736]: W1213 02:01:58.027505 1736 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:58.027835 kubelet[1736]: E1213 02:01:58.027624 1736 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.4:6443: connect: connection refused Dec 13 02:01:58.037336 kubelet[1736]: I1213 02:01:58.037299 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:58.037853 kubelet[1736]: E1213 02:01:58.037827 1736 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.4:6443/api/v1/nodes\": dial tcp 10.128.0.4:6443: connect: connection refused" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:01:58.403320 systemd[1]: run-containerd-runc-k8s.io-70271c406eca70ebb2b6d907f95619849d191d60af8e9b01879f35357913e3da-runc.TO5snL.mount: Deactivated successfully. Dec 13 02:01:59.646536 kubelet[1736]: I1213 02:01:59.646500 1736 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:00.822488 kubelet[1736]: E1213 02:02:00.822440 1736 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:00.877257 kubelet[1736]: I1213 02:02:00.877212 1736 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:01.462444 kubelet[1736]: I1213 02:02:01.462400 1736 apiserver.go:52] "Watching apiserver" Dec 13 02:02:01.510057 kubelet[1736]: I1213 02:02:01.510010 1736 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:02:03.606458 systemd[1]: Reloading. 
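The repeated "Failed to ensure lease exists, will retry" errors refer to the kubelet's heartbeat Lease in the kube-node-lease namespace, which it can only create once the apiserver answers; the log shows registration eventually succeeding once the static pods are up. With the control plane running, the lease can be inspected with client-go — a sketch, with the kubeconfig path an assumption:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; on a control-plane node the admin
	// config would typically live under /etc/kubernetes.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Node name as reported in the log above.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(),
		"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal",
		metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("lease last renewed at:", lease.Spec.RenewTime)
}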
Dec 13 02:02:03.726420 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2024-12-13T02:02:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:02:03.726467 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2024-12-13T02:02:03Z" level=info msg="torcx already run" Dec 13 02:02:03.831246 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:02:03.831277 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:02:03.856550 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:02:04.046181 kubelet[1736]: I1213 02:02:04.046120 1736 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:02:04.046942 systemd[1]: Stopping kubelet.service... Dec 13 02:02:04.059701 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:02:04.060237 systemd[1]: Stopped kubelet.service. Dec 13 02:02:04.066266 systemd[1]: Starting kubelet.service... Dec 13 02:02:04.445147 systemd[1]: Started kubelet.service. Dec 13 02:02:04.574042 kubelet[2078]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:02:04.574042 kubelet[2078]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:02:04.574042 kubelet[2078]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:02:04.574042 kubelet[2078]: I1213 02:02:04.567720 2078 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:02:04.594590 kubelet[2078]: I1213 02:02:04.594544 2078 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:02:04.595138 kubelet[2078]: I1213 02:02:04.595112 2078 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:02:04.595715 kubelet[2078]: I1213 02:02:04.595687 2078 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:02:04.598918 kubelet[2078]: I1213 02:02:04.598887 2078 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
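"Client rotation is on" together with the certificate_store line means this second kubelet instance bootstrapped a client certificate and keeps /var/lib/kubelet/pki/kubelet-client-current.pem pointed at the freshest pair. A stdlib sketch that prints the current certificate's subject and expiry, useful for confirming rotation is working:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the log above; the file holds the client certificate
	// and private key concatenated, so scan for the CERTIFICATE block.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}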
Dec 13 02:02:04.602764 kubelet[2078]: I1213 02:02:04.602723 2078 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:02:04.614538 sudo[2090]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:02:04.615742 sudo[2090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:02:04.622691 kubelet[2078]: I1213 02:02:04.620965 2078 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:02:04.622691 kubelet[2078]: I1213 02:02:04.621466 2078 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:02:04.622691 kubelet[2078]: I1213 02:02:04.621843 2078 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:02:04.622691 kubelet[2078]: I1213 02:02:04.621887 2078 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:02:04.622691 kubelet[2078]: I1213 02:02:04.621905 2078 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:02:04.622691 kubelet[2078]: I1213 02:02:04.621954 2078 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:02:04.623217 kubelet[2078]: I1213 02:02:04.622137 2078 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:02:04.623217 kubelet[2078]: I1213 02:02:04.622161 2078 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:02:04.623217 kubelet[2078]: I1213 02:02:04.622198 2078 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:02:04.623217 kubelet[2078]: I1213 02:02:04.622220 2078 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:02:04.641575 kubelet[2078]: I1213 02:02:04.641534 2078 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:02:04.642267 kubelet[2078]: I1213 02:02:04.642233 2078 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:02:04.643164 kubelet[2078]: I1213 02:02:04.643137 2078 
server.go:1256] "Started kubelet" Dec 13 02:02:04.655207 kubelet[2078]: I1213 02:02:04.655167 2078 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:02:04.667789 kubelet[2078]: I1213 02:02:04.667745 2078 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:02:04.670812 kubelet[2078]: I1213 02:02:04.670775 2078 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:02:04.681122 kubelet[2078]: I1213 02:02:04.681078 2078 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:02:04.683209 kubelet[2078]: I1213 02:02:04.683174 2078 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:02:04.703277 kubelet[2078]: E1213 02:02:04.703235 2078 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:02:04.705783 kubelet[2078]: I1213 02:02:04.705746 2078 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:02:04.706013 kubelet[2078]: I1213 02:02:04.705985 2078 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:02:04.706219 kubelet[2078]: I1213 02:02:04.706200 2078 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:02:04.716138 kubelet[2078]: I1213 02:02:04.716006 2078 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:02:04.716138 kubelet[2078]: I1213 02:02:04.716045 2078 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:02:04.716390 kubelet[2078]: I1213 02:02:04.716177 2078 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:02:04.743839 kubelet[2078]: I1213 02:02:04.743798 2078 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:02:04.776466 kubelet[2078]: I1213 02:02:04.776427 2078 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:02:04.776674 kubelet[2078]: I1213 02:02:04.776646 2078 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:02:04.776769 kubelet[2078]: I1213 02:02:04.776681 2078 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:02:04.777941 kubelet[2078]: E1213 02:02:04.777908 2078 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:02:04.849464 kubelet[2078]: I1213 02:02:04.849172 2078 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:02:04.849464 kubelet[2078]: I1213 02:02:04.849204 2078 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:02:04.849464 kubelet[2078]: I1213 02:02:04.849234 2078 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:02:04.849464 kubelet[2078]: I1213 02:02:04.849477 2078 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:02:04.849924 kubelet[2078]: I1213 02:02:04.849507 2078 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:02:04.849924 kubelet[2078]: I1213 02:02:04.849518 2078 policy_none.go:49] "None policy: Start" Dec 13 02:02:04.851082 kubelet[2078]: I1213 02:02:04.850676 2078 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:02:04.851082 kubelet[2078]: I1213 02:02:04.850715 2078 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:02:04.851082 kubelet[2078]: I1213 02:02:04.850963 2078 state_mem.go:75] "Updated machine memory state" Dec 13 02:02:04.859378 kubelet[2078]: I1213 02:02:04.859056 2078 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:02:04.865482 kubelet[2078]: I1213 02:02:04.865001 2078 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:02:04.878982 kubelet[2078]: I1213 02:02:04.878949 2078 topology_manager.go:215] "Topology Admit Handler" podUID="783f839588b85d4c9c0eddea0bebfa13" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:04.879319 kubelet[2078]: I1213 02:02:04.879300 2078 topology_manager.go:215] "Topology Admit Handler" podUID="4ae37f738edd9528f836c7ceb679f820" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:04.879509 kubelet[2078]: I1213 02:02:04.879483 2078 topology_manager.go:215] "Topology Admit Handler" podUID="fa17cfd47b0a1c94d6ae682d188d6540" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:04.901559 kubelet[2078]: W1213 02:02:04.901444 2078 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:02:04.902104 kubelet[2078]: W1213 02:02:04.902076 2078 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:02:04.911973 kubelet[2078]: W1213 02:02:04.911937 2078 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:02:04.996048 kubelet[2078]: I1213 02:02:04.995926 2078 
kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.009052 kubelet[2078]: I1213 02:02:05.008990 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/783f839588b85d4c9c0eddea0bebfa13-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"783f839588b85d4c9c0eddea0bebfa13\") " pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.009256 kubelet[2078]: I1213 02:02:05.009091 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.009256 kubelet[2078]: I1213 02:02:05.009148 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.009256 kubelet[2078]: I1213 02:02:05.009185 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa17cfd47b0a1c94d6ae682d188d6540-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"fa17cfd47b0a1c94d6ae682d188d6540\") " pod="kube-system/kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.009256 kubelet[2078]: I1213 02:02:05.009238 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/783f839588b85d4c9c0eddea0bebfa13-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"783f839588b85d4c9c0eddea0bebfa13\") " pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.010198 kubelet[2078]: I1213 02:02:05.009293 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/783f839588b85d4c9c0eddea0bebfa13-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"783f839588b85d4c9c0eddea0bebfa13\") " pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.010198 kubelet[2078]: I1213 02:02:05.009335 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.010198 kubelet[2078]: I1213 02:02:05.009396 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.010198 kubelet[2078]: I1213 02:02:05.009469 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ae37f738edd9528f836c7ceb679f820-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" (UID: \"4ae37f738edd9528f836c7ceb679f820\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.012558 kubelet[2078]: I1213 02:02:05.012011 2078 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.012558 kubelet[2078]: I1213 02:02:05.012131 2078 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.505922 sudo[2090]: pam_unix(sudo:session): session closed for user root Dec 13 02:02:05.637650 kubelet[2078]: I1213 02:02:05.637570 2078 apiserver.go:52] "Watching apiserver" Dec 13 02:02:05.706305 kubelet[2078]: I1213 02:02:05.706257 2078 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:02:05.825556 kubelet[2078]: W1213 02:02:05.825419 2078 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:02:05.825763 kubelet[2078]: E1213 02:02:05.825570 2078 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" Dec 13 02:02:05.846409 kubelet[2078]: I1213 02:02:05.846345 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" podStartSLOduration=1.846286458 podStartE2EDuration="1.846286458s" podCreationTimestamp="2024-12-13 02:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:05.843893532 +0000 UTC m=+1.391391212" watchObservedRunningTime="2024-12-13 02:02:05.846286458 +0000 UTC m=+1.393784124" Dec 13 02:02:05.872983 kubelet[2078]: I1213 02:02:05.872939 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" podStartSLOduration=1.872867477 podStartE2EDuration="1.872867477s" podCreationTimestamp="2024-12-13 02:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:05.857685981 +0000 UTC m=+1.405183657" 
watchObservedRunningTime="2024-12-13 02:02:05.872867477 +0000 UTC m=+1.420365156" Dec 13 02:02:07.505139 sudo[1389]: pam_unix(sudo:session): session closed for user root Dec 13 02:02:07.548829 sshd[1378]: pam_unix(sshd:session): session closed for user core Dec 13 02:02:07.553470 systemd[1]: sshd@4-10.128.0.4:22-139.178.68.195:45620.service: Deactivated successfully. Dec 13 02:02:07.554688 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:02:07.554912 systemd[1]: session-5.scope: Consumed 6.692s CPU time. Dec 13 02:02:07.555717 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:02:07.557108 systemd-logind[1208]: Removed session 5. Dec 13 02:02:09.779242 update_engine[1211]: I1213 02:02:09.779163 1211 update_attempter.cc:509] Updating boot flags... Dec 13 02:02:14.366098 kubelet[2078]: I1213 02:02:14.366036 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal" podStartSLOduration=10.365980243 podStartE2EDuration="10.365980243s" podCreationTimestamp="2024-12-13 02:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:05.87440501 +0000 UTC m=+1.421902686" watchObservedRunningTime="2024-12-13 02:02:14.365980243 +0000 UTC m=+9.913477921" Dec 13 02:02:17.170214 kubelet[2078]: I1213 02:02:17.170170 2078 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:02:17.171149 env[1219]: time="2024-12-13T02:02:17.171074224Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:02:17.171672 kubelet[2078]: I1213 02:02:17.171459 2078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:02:17.890953 kubelet[2078]: I1213 02:02:17.890899 2078 topology_manager.go:215] "Topology Admit Handler" podUID="d1465d5c-86e7-4e6c-97ae-6b6a142093fe" podNamespace="kube-system" podName="kube-proxy-pfd4d" Dec 13 02:02:17.895327 kubelet[2078]: I1213 02:02:17.895289 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1465d5c-86e7-4e6c-97ae-6b6a142093fe-xtables-lock\") pod \"kube-proxy-pfd4d\" (UID: \"d1465d5c-86e7-4e6c-97ae-6b6a142093fe\") " pod="kube-system/kube-proxy-pfd4d" Dec 13 02:02:17.895624 kubelet[2078]: I1213 02:02:17.895577 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1465d5c-86e7-4e6c-97ae-6b6a142093fe-lib-modules\") pod \"kube-proxy-pfd4d\" (UID: \"d1465d5c-86e7-4e6c-97ae-6b6a142093fe\") " pod="kube-system/kube-proxy-pfd4d" Dec 13 02:02:17.895837 kubelet[2078]: I1213 02:02:17.895809 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1465d5c-86e7-4e6c-97ae-6b6a142093fe-kube-proxy\") pod \"kube-proxy-pfd4d\" (UID: \"d1465d5c-86e7-4e6c-97ae-6b6a142093fe\") " pod="kube-system/kube-proxy-pfd4d" Dec 13 02:02:17.896029 kubelet[2078]: I1213 02:02:17.895999 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnv8n\" (UniqueName: 
\"kubernetes.io/projected/d1465d5c-86e7-4e6c-97ae-6b6a142093fe-kube-api-access-bnv8n\") pod \"kube-proxy-pfd4d\" (UID: \"d1465d5c-86e7-4e6c-97ae-6b6a142093fe\") " pod="kube-system/kube-proxy-pfd4d" Dec 13 02:02:17.901559 kubelet[2078]: I1213 02:02:17.901502 2078 topology_manager.go:215] "Topology Admit Handler" podUID="70294008-4610-4a1d-bdba-35e7e738842a" podNamespace="kube-system" podName="cilium-xc8dq" Dec 13 02:02:17.913088 systemd[1]: Created slice kubepods-besteffort-podd1465d5c_86e7_4e6c_97ae_6b6a142093fe.slice. Dec 13 02:02:17.924180 systemd[1]: Created slice kubepods-burstable-pod70294008_4610_4a1d_bdba_35e7e738842a.slice. Dec 13 02:02:17.996708 kubelet[2078]: I1213 02:02:17.996661 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-bpf-maps\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.997029 kubelet[2078]: I1213 02:02:17.997000 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70294008-4610-4a1d-bdba-35e7e738842a-cilium-config-path\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.997333 kubelet[2078]: I1213 02:02:17.997311 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-etc-cni-netd\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.997490 kubelet[2078]: I1213 02:02:17.997473 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-net\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.997662 kubelet[2078]: I1213 02:02:17.997644 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-hostproc\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.997815 kubelet[2078]: I1213 02:02:17.997800 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-xtables-lock\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.997949 kubelet[2078]: I1213 02:02:17.997928 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-hubble-tls\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.998094 kubelet[2078]: I1213 02:02:17.998077 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70294008-4610-4a1d-bdba-35e7e738842a-clustermesh-secrets\") pod \"cilium-xc8dq\" (UID: 
\"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.998236 kubelet[2078]: I1213 02:02:17.998219 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-cgroup\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.998366 kubelet[2078]: I1213 02:02:17.998351 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-lib-modules\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.998498 kubelet[2078]: I1213 02:02:17.998479 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-kernel\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.998681 kubelet[2078]: I1213 02:02:17.998664 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwvb\" (UniqueName: \"kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-kube-api-access-txwvb\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.998868 kubelet[2078]: I1213 02:02:17.998851 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-run\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:17.999002 kubelet[2078]: I1213 02:02:17.998987 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cni-path\") pod \"cilium-xc8dq\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") " pod="kube-system/cilium-xc8dq" Dec 13 02:02:18.008065 kubelet[2078]: E1213 02:02:18.008021 2078 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 02:02:18.008299 kubelet[2078]: E1213 02:02:18.008279 2078 projected.go:200] Error preparing data for projected volume kube-api-access-bnv8n for pod kube-system/kube-proxy-pfd4d: configmap "kube-root-ca.crt" not found Dec 13 02:02:18.008501 kubelet[2078]: E1213 02:02:18.008484 2078 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1465d5c-86e7-4e6c-97ae-6b6a142093fe-kube-api-access-bnv8n podName:d1465d5c-86e7-4e6c-97ae-6b6a142093fe nodeName:}" failed. No retries permitted until 2024-12-13 02:02:18.508449265 +0000 UTC m=+14.055946940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bnv8n" (UniqueName: "kubernetes.io/projected/d1465d5c-86e7-4e6c-97ae-6b6a142093fe-kube-api-access-bnv8n") pod "kube-proxy-pfd4d" (UID: "d1465d5c-86e7-4e6c-97ae-6b6a142093fe") : configmap "kube-root-ca.crt" not found Dec 13 02:02:18.213077 kubelet[2078]: I1213 02:02:18.212996 2078 topology_manager.go:215] "Topology Admit Handler" podUID="fec2f200-7466-47f6-8105-b3792d78219d" podNamespace="kube-system" podName="cilium-operator-5cc964979-pn845" Dec 13 02:02:18.220974 systemd[1]: Created slice kubepods-besteffort-podfec2f200_7466_47f6_8105_b3792d78219d.slice. Dec 13 02:02:18.231039 env[1219]: time="2024-12-13T02:02:18.230311393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xc8dq,Uid:70294008-4610-4a1d-bdba-35e7e738842a,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:18.263458 env[1219]: time="2024-12-13T02:02:18.263345563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:18.263458 env[1219]: time="2024-12-13T02:02:18.263408452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:18.263800 env[1219]: time="2024-12-13T02:02:18.263428333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:18.264678 env[1219]: time="2024-12-13T02:02:18.264069403Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794 pid=2175 runtime=io.containerd.runc.v2 Dec 13 02:02:18.288866 systemd[1]: Started cri-containerd-6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794.scope. 
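The kube-api-access mount above fails because the kube-root-ca.crt configmap has not been published yet, and kubelet schedules a retry on a doubling backoff ("durationBeforeRetry 500ms"). A minimal sketch of that schedule, assuming the usual volume-manager defaults of a 500ms initial delay capped at 2m2s (neither constant is read from this host):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 500ms initial delay, doubling per
        // failure, capped at 2m2s. The first line matches the entry
        // above: "(durationBeforeRetry 500ms)".
        wait := 500 * time.Millisecond
        const maxWait = 2*time.Minute + 2*time.Second
        for attempt := 1; attempt <= 9; attempt++ {
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, wait)
            wait *= 2
            if wait > maxWait {
                wait = maxWait
            }
        }
    }

Only the first delay mattered on this boot: the retry was permitted from 02:02:18.508, and the kube-proxy sandbox comes up successfully moments later.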
Dec 13 02:02:18.309705 kubelet[2078]: I1213 02:02:18.309371 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fec2f200-7466-47f6-8105-b3792d78219d-cilium-config-path\") pod \"cilium-operator-5cc964979-pn845\" (UID: \"fec2f200-7466-47f6-8105-b3792d78219d\") " pod="kube-system/cilium-operator-5cc964979-pn845" Dec 13 02:02:18.309705 kubelet[2078]: I1213 02:02:18.309546 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9njs\" (UniqueName: \"kubernetes.io/projected/fec2f200-7466-47f6-8105-b3792d78219d-kube-api-access-c9njs\") pod \"cilium-operator-5cc964979-pn845\" (UID: \"fec2f200-7466-47f6-8105-b3792d78219d\") " pod="kube-system/cilium-operator-5cc964979-pn845" Dec 13 02:02:18.333863 env[1219]: time="2024-12-13T02:02:18.333811077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xc8dq,Uid:70294008-4610-4a1d-bdba-35e7e738842a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\"" Dec 13 02:02:18.336848 env[1219]: time="2024-12-13T02:02:18.336808203Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:02:18.523922 env[1219]: time="2024-12-13T02:02:18.523343245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfd4d,Uid:d1465d5c-86e7-4e6c-97ae-6b6a142093fe,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:18.525177 env[1219]: time="2024-12-13T02:02:18.525130949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pn845,Uid:fec2f200-7466-47f6-8105-b3792d78219d,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:18.557925 env[1219]: time="2024-12-13T02:02:18.557686118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:18.557925 env[1219]: time="2024-12-13T02:02:18.557724535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:18.557925 env[1219]: time="2024-12-13T02:02:18.557744316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:18.557925 env[1219]: time="2024-12-13T02:02:18.557654731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:18.557925 env[1219]: time="2024-12-13T02:02:18.557724613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:18.557925 env[1219]: time="2024-12-13T02:02:18.557744291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:18.558411 env[1219]: time="2024-12-13T02:02:18.558007526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e7f921815d955c329f391e4fec49903b53d273f202c496e49f94da1f0211dff pid=2223 runtime=io.containerd.runc.v2 Dec 13 02:02:18.559381 env[1219]: time="2024-12-13T02:02:18.559266876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b pid=2227 runtime=io.containerd.runc.v2 Dec 13 02:02:18.579943 systemd[1]: Started cri-containerd-5e7f921815d955c329f391e4fec49903b53d273f202c496e49f94da1f0211dff.scope. Dec 13 02:02:18.597004 systemd[1]: Started cri-containerd-d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b.scope. Dec 13 02:02:18.644342 env[1219]: time="2024-12-13T02:02:18.644287602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfd4d,Uid:d1465d5c-86e7-4e6c-97ae-6b6a142093fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e7f921815d955c329f391e4fec49903b53d273f202c496e49f94da1f0211dff\"" Dec 13 02:02:18.650776 env[1219]: time="2024-12-13T02:02:18.650728250Z" level=info msg="CreateContainer within sandbox \"5e7f921815d955c329f391e4fec49903b53d273f202c496e49f94da1f0211dff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:02:18.682144 env[1219]: time="2024-12-13T02:02:18.682076226Z" level=info msg="CreateContainer within sandbox \"5e7f921815d955c329f391e4fec49903b53d273f202c496e49f94da1f0211dff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80cfe5f0e1af02d6b444e335e005525f21acc8b35c30d99c0710e9527e53e982\"" Dec 13 02:02:18.689091 env[1219]: time="2024-12-13T02:02:18.688095959Z" level=info msg="StartContainer for \"80cfe5f0e1af02d6b444e335e005525f21acc8b35c30d99c0710e9527e53e982\"" Dec 13 02:02:18.702991 env[1219]: time="2024-12-13T02:02:18.702942396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pn845,Uid:fec2f200-7466-47f6-8105-b3792d78219d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b\"" Dec 13 02:02:18.723402 systemd[1]: Started cri-containerd-80cfe5f0e1af02d6b444e335e005525f21acc8b35c30d99c0710e9527e53e982.scope. Dec 13 02:02:18.768762 env[1219]: time="2024-12-13T02:02:18.767776620Z" level=info msg="StartContainer for \"80cfe5f0e1af02d6b444e335e005525f21acc8b35c30d99c0710e9527e53e982\" returns successfully" Dec 13 02:02:26.372872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164926779.mount: Deactivated successfully. 
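Both RunPodSandbox calls above come back with 64-hex sandbox ids (5e7f9218… for kube-proxy-pfd4d, d3a15bb3… for the operator). A small sketch that recovers the pod-to-sandbox mapping from captured lines like these; the regular expression simply mirrors the message layout visible above, and the optional backslash tolerates the journal's escaped quotes:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Mirrors containerd's message layout seen in this log:
    //   RunPodSandbox for &PodSandboxMetadata{Name:<pod>,...} returns sandbox id "<64 hex>"
    var sandboxRe = regexp.MustCompile(
        `RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*returns sandbox id \\?"([0-9a-f]{64})\\?"`)

    func main() {
        line := `RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfd4d,Uid:d1465d5c-86e7-4e6c-97ae-6b6a142093fe,Namespace:kube-system,Attempt:0,} returns sandbox id "5e7f921815d955c329f391e4fec49903b53d273f202c496e49f94da1f0211dff"`
        if m := sandboxRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("pod %s -> sandbox %s\n", m[1], m[2])
        }
    }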
Dec 13 02:02:29.833923 env[1219]: time="2024-12-13T02:02:29.833853290Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:29.836743 env[1219]: time="2024-12-13T02:02:29.836667106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:29.839547 env[1219]: time="2024-12-13T02:02:29.839499932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:29.840557 env[1219]: time="2024-12-13T02:02:29.840491982Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:02:29.844506 env[1219]: time="2024-12-13T02:02:29.844465015Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:02:29.846527 env[1219]: time="2024-12-13T02:02:29.846485837Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:02:29.876904 env[1219]: time="2024-12-13T02:02:29.876832865Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\"" Dec 13 02:02:29.879218 env[1219]: time="2024-12-13T02:02:29.878090438Z" level=info msg="StartContainer for \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\"" Dec 13 02:02:29.922150 systemd[1]: run-containerd-runc-k8s.io-49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7-runc.7r8NJY.mount: Deactivated successfully. Dec 13 02:02:29.927833 systemd[1]: Started cri-containerd-49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7.scope. Dec 13 02:02:29.972852 env[1219]: time="2024-12-13T02:02:29.972790210Z" level=info msg="StartContainer for \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\" returns successfully" Dec 13 02:02:29.982379 systemd[1]: cri-containerd-49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7.scope: Deactivated successfully. Dec 13 02:02:30.860302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7-rootfs.mount: Deactivated successfully. 
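The pull just above was requested as name:tag@digest and returned a different local identifier (sha256:3e35b3e9…). Splitting such a pinned reference is mostly string surgery; a simplified sketch (real resolvers use the full distribution reference grammar, which this does not implement):

    package main

    import (
        "fmt"
        "strings"
    )

    func splitRef(ref string) (repo, tag, digest string) {
        if at := strings.Index(ref, "@"); at >= 0 {
            digest = ref[at+1:]
            ref = ref[:at]
        }
        // A ':' after the last '/' separates the tag from the repository
        // (the earlier ':' in a registry port would come before a '/').
        if colon := strings.LastIndex(ref, ":"); colon > strings.LastIndex(ref, "/") {
            tag = ref[colon+1:]
            ref = ref[:colon]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
        fmt.Println(repo)   // quay.io/cilium/cilium
        fmt.Println(tag)    // v1.12.5
        fmt.Println(digest) // sha256:06ce2b0a...
    }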
Dec 13 02:02:30.924039 kubelet[2078]: I1213 02:02:30.923986 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pfd4d" podStartSLOduration=13.923928403 podStartE2EDuration="13.923928403s" podCreationTimestamp="2024-12-13 02:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:18.856947343 +0000 UTC m=+14.404445023" watchObservedRunningTime="2024-12-13 02:02:30.923928403 +0000 UTC m=+26.471426080" Dec 13 02:02:31.815503 env[1219]: time="2024-12-13T02:02:31.815437003Z" level=info msg="shim disconnected" id=49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7 Dec 13 02:02:31.815503 env[1219]: time="2024-12-13T02:02:31.815505466Z" level=warning msg="cleaning up after shim disconnected" id=49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7 namespace=k8s.io Dec 13 02:02:31.816213 env[1219]: time="2024-12-13T02:02:31.815521169Z" level=info msg="cleaning up dead shim" Dec 13 02:02:31.829761 env[1219]: time="2024-12-13T02:02:31.829692724Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2501 runtime=io.containerd.runc.v2\n" Dec 13 02:02:31.914162 env[1219]: time="2024-12-13T02:02:31.914099043Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:02:31.944946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429417874.mount: Deactivated successfully. Dec 13 02:02:31.954176 env[1219]: time="2024-12-13T02:02:31.951839380Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\"" Dec 13 02:02:31.954176 env[1219]: time="2024-12-13T02:02:31.952899178Z" level=info msg="StartContainer for \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\"" Dec 13 02:02:32.003852 systemd[1]: Started cri-containerd-8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb.scope. Dec 13 02:02:32.050071 env[1219]: time="2024-12-13T02:02:32.049905452Z" level=info msg="StartContainer for \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\" returns successfully" Dec 13 02:02:32.065897 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:02:32.068220 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:02:32.068826 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:02:32.075113 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:02:32.080734 systemd[1]: cri-containerd-8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb.scope: Deactivated successfully. Dec 13 02:02:32.095203 systemd[1]: Finished systemd-sysctl.service. 
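The kube-proxy figure above is straight subtraction: podStartSLOduration=13.923928403s is watchObservedRunningTime (02:02:30.923928403) minus podCreationTimestamp (02:02:17), with no pull window to deduct since both pull timestamps are printed as the zero value. The same arithmetic, using the exact timestamp layout these entries print:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-12-13 02:02:17 +0000 UTC")
        observed, _ := time.Parse(layout, "2024-12-13 02:02:30.923928403 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 13.923928403s
    }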
Dec 13 02:02:32.130241 env[1219]: time="2024-12-13T02:02:32.130175198Z" level=info msg="shim disconnected" id=8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb Dec 13 02:02:32.130623 env[1219]: time="2024-12-13T02:02:32.130551223Z" level=warning msg="cleaning up after shim disconnected" id=8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb namespace=k8s.io Dec 13 02:02:32.130623 env[1219]: time="2024-12-13T02:02:32.130580755Z" level=info msg="cleaning up dead shim" Dec 13 02:02:32.147020 env[1219]: time="2024-12-13T02:02:32.146962029Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2565 runtime=io.containerd.runc.v2\n" Dec 13 02:02:32.919908 env[1219]: time="2024-12-13T02:02:32.919500505Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:02:32.934335 systemd[1]: run-containerd-runc-k8s.io-8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb-runc.VFSN43.mount: Deactivated successfully. Dec 13 02:02:32.934650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb-rootfs.mount: Deactivated successfully. Dec 13 02:02:32.951638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65767005.mount: Deactivated successfully. Dec 13 02:02:32.965846 env[1219]: time="2024-12-13T02:02:32.965789102Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\"" Dec 13 02:02:32.967032 env[1219]: time="2024-12-13T02:02:32.966923092Z" level=info msg="StartContainer for \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\"" Dec 13 02:02:33.010660 systemd[1]: Started cri-containerd-4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037.scope. Dec 13 02:02:33.101347 env[1219]: time="2024-12-13T02:02:33.101289804Z" level=info msg="StartContainer for \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\" returns successfully" Dec 13 02:02:33.105867 systemd[1]: cri-containerd-4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037.scope: Deactivated successfully. Dec 13 02:02:33.145947 env[1219]: time="2024-12-13T02:02:33.145865189Z" level=info msg="shim disconnected" id=4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037 Dec 13 02:02:33.145947 env[1219]: time="2024-12-13T02:02:33.145925051Z" level=warning msg="cleaning up after shim disconnected" id=4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037 namespace=k8s.io Dec 13 02:02:33.145947 env[1219]: time="2024-12-13T02:02:33.145941751Z" level=info msg="cleaning up dead shim" Dec 13 02:02:33.164792 env[1219]: time="2024-12-13T02:02:33.164398251Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2621 runtime=io.containerd.runc.v2\n" Dec 13 02:02:33.934769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037-rootfs.mount: Deactivated successfully. 
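The mount-bpf-fs step above runs once, pins the BPF filesystem, and exits, which is why its scope is deactivated seconds after starting. What it does reduces to a single mount(2) call; a sketch with golang.org/x/sys/unix (the target path and the tolerate-already-mounted behavior are assumptions about the init step, not read from this log):

    package main

    import (
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Equivalent of: mount -t bpf bpffs /sys/fs/bpf
        // Must run as root; EBUSY is tolerated on the assumption that
        // the filesystem is already mounted from a previous run.
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
            log.Fatalf("mounting bpffs: %v", err)
        }
    }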
Dec 13 02:02:33.948443 env[1219]: time="2024-12-13T02:02:33.948391410Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:02:33.984291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175076259.mount: Deactivated successfully. Dec 13 02:02:33.989720 env[1219]: time="2024-12-13T02:02:33.989665300Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\"" Dec 13 02:02:33.992417 env[1219]: time="2024-12-13T02:02:33.992375394Z" level=info msg="StartContainer for \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\"" Dec 13 02:02:34.045265 systemd[1]: Started cri-containerd-34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680.scope. Dec 13 02:02:34.116081 systemd[1]: cri-containerd-34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680.scope: Deactivated successfully. Dec 13 02:02:34.118470 env[1219]: time="2024-12-13T02:02:34.118160234Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70294008_4610_4a1d_bdba_35e7e738842a.slice/cri-containerd-34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680.scope/memory.events\": no such file or directory" Dec 13 02:02:34.122690 env[1219]: time="2024-12-13T02:02:34.122642226Z" level=info msg="StartContainer for \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\" returns successfully" Dec 13 02:02:34.191043 env[1219]: time="2024-12-13T02:02:34.190888654Z" level=info msg="shim disconnected" id=34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680 Dec 13 02:02:34.191043 env[1219]: time="2024-12-13T02:02:34.190955705Z" level=warning msg="cleaning up after shim disconnected" id=34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680 namespace=k8s.io Dec 13 02:02:34.191043 env[1219]: time="2024-12-13T02:02:34.190969813Z" level=info msg="cleaning up dead shim" Dec 13 02:02:34.210795 env[1219]: time="2024-12-13T02:02:34.210739373Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2680 runtime=io.containerd.runc.v2\n" Dec 13 02:02:34.720743 env[1219]: time="2024-12-13T02:02:34.720673485Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:34.723315 env[1219]: time="2024-12-13T02:02:34.723260990Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:34.726378 env[1219]: time="2024-12-13T02:02:34.726316785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:34.727242 env[1219]: time="2024-12-13T02:02:34.727157651Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:02:34.731497 env[1219]: time="2024-12-13T02:02:34.731456849Z" level=info msg="CreateContainer within sandbox \"d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:02:34.755302 env[1219]: time="2024-12-13T02:02:34.755220928Z" level=info msg="CreateContainer within sandbox \"d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\"" Dec 13 02:02:34.758565 env[1219]: time="2024-12-13T02:02:34.758506370Z" level=info msg="StartContainer for \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\"" Dec 13 02:02:34.791496 systemd[1]: Started cri-containerd-b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3.scope. Dec 13 02:02:34.840443 env[1219]: time="2024-12-13T02:02:34.840304263Z" level=info msg="StartContainer for \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\" returns successfully" Dec 13 02:02:34.939196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680-rootfs.mount: Deactivated successfully. Dec 13 02:02:34.948805 env[1219]: time="2024-12-13T02:02:34.948743157Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:02:34.969822 kubelet[2078]: I1213 02:02:34.964403 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pn845" podStartSLOduration=0.941214846 podStartE2EDuration="16.964337586s" podCreationTimestamp="2024-12-13 02:02:18 +0000 UTC" firstStartedPulling="2024-12-13 02:02:18.704518006 +0000 UTC m=+14.252015673" lastFinishedPulling="2024-12-13 02:02:34.727640748 +0000 UTC m=+30.275138413" observedRunningTime="2024-12-13 02:02:34.959699366 +0000 UTC m=+30.507197043" watchObservedRunningTime="2024-12-13 02:02:34.964337586 +0000 UTC m=+30.511835262" Dec 13 02:02:34.997936 env[1219]: time="2024-12-13T02:02:34.997769498Z" level=info msg="CreateContainer within sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\"" Dec 13 02:02:35.007860 env[1219]: time="2024-12-13T02:02:35.007805515Z" level=info msg="StartContainer for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\"" Dec 13 02:02:35.079089 systemd[1]: Started cri-containerd-3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220.scope. 
Dec 13 02:02:35.165411 env[1219]: time="2024-12-13T02:02:35.165344602Z" level=info msg="StartContainer for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" returns successfully" Dec 13 02:02:35.283722 kubelet[2078]: I1213 02:02:35.283563 2078 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:02:35.326670 kubelet[2078]: I1213 02:02:35.326610 2078 topology_manager.go:215] "Topology Admit Handler" podUID="71b72e06-8b7f-4263-aa16-eb7c0878fd25" podNamespace="kube-system" podName="coredns-76f75df574-52qsk" Dec 13 02:02:35.334723 systemd[1]: Created slice kubepods-burstable-pod71b72e06_8b7f_4263_aa16_eb7c0878fd25.slice. Dec 13 02:02:35.342095 kubelet[2078]: I1213 02:02:35.342060 2078 topology_manager.go:215] "Topology Admit Handler" podUID="a203c4af-7e31-4be7-83a0-775ef35d2c6c" podNamespace="kube-system" podName="coredns-76f75df574-7d7p7" Dec 13 02:02:35.346441 kubelet[2078]: I1213 02:02:35.346406 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a203c4af-7e31-4be7-83a0-775ef35d2c6c-config-volume\") pod \"coredns-76f75df574-7d7p7\" (UID: \"a203c4af-7e31-4be7-83a0-775ef35d2c6c\") " pod="kube-system/coredns-76f75df574-7d7p7" Dec 13 02:02:35.346727 kubelet[2078]: I1213 02:02:35.346707 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxxw\" (UniqueName: \"kubernetes.io/projected/a203c4af-7e31-4be7-83a0-775ef35d2c6c-kube-api-access-fxxxw\") pod \"coredns-76f75df574-7d7p7\" (UID: \"a203c4af-7e31-4be7-83a0-775ef35d2c6c\") " pod="kube-system/coredns-76f75df574-7d7p7" Dec 13 02:02:35.346924 kubelet[2078]: I1213 02:02:35.346905 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq4bh\" (UniqueName: \"kubernetes.io/projected/71b72e06-8b7f-4263-aa16-eb7c0878fd25-kube-api-access-nq4bh\") pod \"coredns-76f75df574-52qsk\" (UID: \"71b72e06-8b7f-4263-aa16-eb7c0878fd25\") " pod="kube-system/coredns-76f75df574-52qsk" Dec 13 02:02:35.347084 kubelet[2078]: I1213 02:02:35.347068 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71b72e06-8b7f-4263-aa16-eb7c0878fd25-config-volume\") pod \"coredns-76f75df574-52qsk\" (UID: \"71b72e06-8b7f-4263-aa16-eb7c0878fd25\") " pod="kube-system/coredns-76f75df574-52qsk" Dec 13 02:02:35.350713 systemd[1]: Created slice kubepods-burstable-poda203c4af_7e31_4be7_83a0_775ef35d2c6c.slice. Dec 13 02:02:35.649743 env[1219]: time="2024-12-13T02:02:35.649062576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-52qsk,Uid:71b72e06-8b7f-4263-aa16-eb7c0878fd25,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:35.658762 env[1219]: time="2024-12-13T02:02:35.658556368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7d7p7,Uid:a203c4af-7e31-4be7-83a0-775ef35d2c6c,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:35.954896 systemd[1]: run-containerd-runc-k8s.io-3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220-runc.KLIuuh.mount: Deactivated successfully. 
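The kube-api-access-fxxxw and kube-api-access-nq4bh names above (and bnv8n, txwvb, c9njs earlier) all end in a five-character suffix drawn from what looks like apimachinery's vowel-free generation alphabet. A sketch of that generation; the exact character set is an assumption inferred from the suffixes in this log, not an authoritative copy of the upstream constant:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Assumed alphabet: no vowels or look-alike characters, so the
    // suffix can't spell words or be misread. Every suffix in this
    // log (bnv8n, txwvb, c9njs, fxxxw, nq4bh) fits this set.
    const alphabet = "bcdfghjklmnpqrstvwxz2456789"

    func suffix(n int) string {
        b := make([]byte, n)
        for i := range b {
            b[i] = alphabet[rand.Intn(len(alphabet))]
        }
        return string(b)
    }

    func main() {
        fmt.Println("kube-api-access-" + suffix(5)) // e.g. kube-api-access-bnv8n
    }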
Dec 13 02:02:39.106576 systemd-networkd[1025]: cilium_host: Link UP Dec 13 02:02:39.108348 systemd-networkd[1025]: cilium_net: Link UP Dec 13 02:02:39.114686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:02:39.115186 systemd-networkd[1025]: cilium_net: Gained carrier Dec 13 02:02:39.122683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:02:39.122999 systemd-networkd[1025]: cilium_host: Gained carrier Dec 13 02:02:39.124809 systemd-networkd[1025]: cilium_net: Gained IPv6LL Dec 13 02:02:39.274480 systemd-networkd[1025]: cilium_vxlan: Link UP Dec 13 02:02:39.274492 systemd-networkd[1025]: cilium_vxlan: Gained carrier Dec 13 02:02:39.552627 kernel: NET: Registered PF_ALG protocol family Dec 13 02:02:40.034002 systemd-networkd[1025]: cilium_host: Gained IPv6LL Dec 13 02:02:40.412878 systemd-networkd[1025]: lxc_health: Link UP Dec 13 02:02:40.431634 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:02:40.436466 systemd-networkd[1025]: lxc_health: Gained carrier Dec 13 02:02:40.750094 systemd-networkd[1025]: lxc3a58ee66d904: Link UP Dec 13 02:02:40.767853 kernel: eth0: renamed from tmpf79bd Dec 13 02:02:40.787402 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3a58ee66d904: link becomes ready Dec 13 02:02:40.789014 systemd-networkd[1025]: lxc3a58ee66d904: Gained carrier Dec 13 02:02:40.795159 systemd-networkd[1025]: lxcf2a66ab9ebae: Link UP Dec 13 02:02:40.807670 kernel: eth0: renamed from tmpefd7f Dec 13 02:02:40.826637 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf2a66ab9ebae: link becomes ready Dec 13 02:02:40.827213 systemd-networkd[1025]: lxcf2a66ab9ebae: Gained carrier Dec 13 02:02:41.186338 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL Dec 13 02:02:41.698205 systemd-networkd[1025]: lxc_health: Gained IPv6LL Dec 13 02:02:42.265762 kubelet[2078]: I1213 02:02:42.265711 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xc8dq" podStartSLOduration=13.76028355 podStartE2EDuration="25.26563941s" podCreationTimestamp="2024-12-13 02:02:17 +0000 UTC" firstStartedPulling="2024-12-13 02:02:18.33596363 +0000 UTC m=+13.883461287" lastFinishedPulling="2024-12-13 02:02:29.841319444 +0000 UTC m=+25.388817147" observedRunningTime="2024-12-13 02:02:36.094215629 +0000 UTC m=+31.641713316" watchObservedRunningTime="2024-12-13 02:02:42.26563941 +0000 UTC m=+37.813137082" Dec 13 02:02:42.593784 systemd-networkd[1025]: lxc3a58ee66d904: Gained IPv6LL Dec 13 02:02:42.594228 systemd-networkd[1025]: lxcf2a66ab9ebae: Gained IPv6LL Dec 13 02:02:45.912637 env[1219]: time="2024-12-13T02:02:45.911533302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:45.912637 env[1219]: time="2024-12-13T02:02:45.911676253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:45.912637 env[1219]: time="2024-12-13T02:02:45.911730695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:45.912637 env[1219]: time="2024-12-13T02:02:45.912224270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd7fbae08834195bb00e3137a5a9fbf41163a01a00a6ffaa7066a69812238b7 pid=3258 runtime=io.containerd.runc.v2 Dec 13 02:02:45.947222 env[1219]: time="2024-12-13T02:02:45.947117882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:45.947455 env[1219]: time="2024-12-13T02:02:45.947256249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:45.947455 env[1219]: time="2024-12-13T02:02:45.947300104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:45.947612 env[1219]: time="2024-12-13T02:02:45.947530055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f79bd6bde7f3ae46be81b3d8c59abe7cd5c55170e85c015f6581628f9b2648da pid=3278 runtime=io.containerd.runc.v2 Dec 13 02:02:45.984062 systemd[1]: Started cri-containerd-efd7fbae08834195bb00e3137a5a9fbf41163a01a00a6ffaa7066a69812238b7.scope. Dec 13 02:02:45.989694 systemd[1]: run-containerd-runc-k8s.io-efd7fbae08834195bb00e3137a5a9fbf41163a01a00a6ffaa7066a69812238b7-runc.gNlmdF.mount: Deactivated successfully. Dec 13 02:02:46.019277 systemd[1]: Started cri-containerd-f79bd6bde7f3ae46be81b3d8c59abe7cd5c55170e85c015f6581628f9b2648da.scope. Dec 13 02:02:46.108184 env[1219]: time="2024-12-13T02:02:46.108123886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-52qsk,Uid:71b72e06-8b7f-4263-aa16-eb7c0878fd25,Namespace:kube-system,Attempt:0,} returns sandbox id \"f79bd6bde7f3ae46be81b3d8c59abe7cd5c55170e85c015f6581628f9b2648da\"" Dec 13 02:02:46.116936 env[1219]: time="2024-12-13T02:02:46.116883356Z" level=info msg="CreateContainer within sandbox \"f79bd6bde7f3ae46be81b3d8c59abe7cd5c55170e85c015f6581628f9b2648da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:02:46.141517 env[1219]: time="2024-12-13T02:02:46.141449005Z" level=info msg="CreateContainer within sandbox \"f79bd6bde7f3ae46be81b3d8c59abe7cd5c55170e85c015f6581628f9b2648da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9841fa42f925c2a6ffb5543e15b52216e2a935fdfab6868e273a4ecedfa7d04\"" Dec 13 02:02:46.143003 env[1219]: time="2024-12-13T02:02:46.142936957Z" level=info msg="StartContainer for \"e9841fa42f925c2a6ffb5543e15b52216e2a935fdfab6868e273a4ecedfa7d04\"" Dec 13 02:02:46.172512 env[1219]: time="2024-12-13T02:02:46.171303388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7d7p7,Uid:a203c4af-7e31-4be7-83a0-775ef35d2c6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd7fbae08834195bb00e3137a5a9fbf41163a01a00a6ffaa7066a69812238b7\"" Dec 13 02:02:46.175096 env[1219]: time="2024-12-13T02:02:46.175044014Z" level=info msg="CreateContainer within sandbox \"efd7fbae08834195bb00e3137a5a9fbf41163a01a00a6ffaa7066a69812238b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:02:46.199492 systemd[1]: Started cri-containerd-e9841fa42f925c2a6ffb5543e15b52216e2a935fdfab6868e273a4ecedfa7d04.scope. 
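The cilium_host/cilium_net pair whose carrier came up at 02:02:39 is an ordinary veth pair created over rtnetlink, and the per-pod lxc* devices that followed are wired the same way. A sketch of the equivalent calls using github.com/vishvananda/netlink; only the interface names are taken from the log, and Cilium's agent does this through its own datapath code rather than literally this sequence:

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // veth pair matching the "cilium_net: Link UP" / "cilium_host:
        // Link UP ... Gained carrier" sequence logged above.
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "cilium_net"},
            PeerName:  "cilium_host",
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatalf("creating veth pair: %v", err)
        }
        for _, name := range []string{"cilium_net", "cilium_host"} {
            link, err := netlink.LinkByName(name)
            if err != nil {
                log.Fatal(err)
            }
            if err := netlink.LinkSetUp(link); err != nil {
                log.Fatalf("bringing %s up: %v", name, err)
            }
        }
    }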
Dec 13 02:02:46.202181 env[1219]: time="2024-12-13T02:02:46.202129992Z" level=info msg="CreateContainer within sandbox \"efd7fbae08834195bb00e3137a5a9fbf41163a01a00a6ffaa7066a69812238b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"406b8633791d4840e03738efd77eb26a280f1f87252ee74bd447e7ae3b7989f1\"" Dec 13 02:02:46.203409 env[1219]: time="2024-12-13T02:02:46.203372942Z" level=info msg="StartContainer for \"406b8633791d4840e03738efd77eb26a280f1f87252ee74bd447e7ae3b7989f1\"" Dec 13 02:02:46.247927 systemd[1]: Started cri-containerd-406b8633791d4840e03738efd77eb26a280f1f87252ee74bd447e7ae3b7989f1.scope. Dec 13 02:02:46.305554 env[1219]: time="2024-12-13T02:02:46.305503366Z" level=info msg="StartContainer for \"e9841fa42f925c2a6ffb5543e15b52216e2a935fdfab6868e273a4ecedfa7d04\" returns successfully" Dec 13 02:02:46.341569 env[1219]: time="2024-12-13T02:02:46.341507209Z" level=info msg="StartContainer for \"406b8633791d4840e03738efd77eb26a280f1f87252ee74bd447e7ae3b7989f1\" returns successfully" Dec 13 02:02:47.005590 kubelet[2078]: I1213 02:02:47.005550 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-52qsk" podStartSLOduration=29.005498316 podStartE2EDuration="29.005498316s" podCreationTimestamp="2024-12-13 02:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:47.005090011 +0000 UTC m=+42.552587702" watchObservedRunningTime="2024-12-13 02:02:47.005498316 +0000 UTC m=+42.552995996" Dec 13 02:02:47.023280 kubelet[2078]: I1213 02:02:47.023240 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7d7p7" podStartSLOduration=29.02318005 podStartE2EDuration="29.02318005s" podCreationTimestamp="2024-12-13 02:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:47.021447618 +0000 UTC m=+42.568945359" watchObservedRunningTime="2024-12-13 02:02:47.02318005 +0000 UTC m=+42.570677728" Dec 13 02:03:06.305624 systemd[1]: Started sshd@5-10.128.0.4:22-139.178.68.195:39050.service. Dec 13 02:03:06.601403 sshd[3426]: Accepted publickey for core from 139.178.68.195 port 39050 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:06.603640 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:06.610711 systemd-logind[1208]: New session 6 of user core. Dec 13 02:03:06.610712 systemd[1]: Started session-6.scope. Dec 13 02:03:06.904810 sshd[3426]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:06.909575 systemd[1]: sshd@5-10.128.0.4:22-139.178.68.195:39050.service: Deactivated successfully. Dec 13 02:03:06.910695 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:03:06.911767 systemd-logind[1208]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:03:06.913212 systemd-logind[1208]: Removed session 6. Dec 13 02:03:11.952882 systemd[1]: Started sshd@6-10.128.0.4:22-139.178.68.195:39056.service. Dec 13 02:03:12.249245 sshd[3439]: Accepted publickey for core from 139.178.68.195 port 39056 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:12.251358 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:12.258898 systemd[1]: Started session-7.scope. 
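Every SSH session in the remainder of this log follows the same shape seen here: an Accepted publickey line carrying the source address, port, and key fingerprint, then a pam_unix session open, then a systemd-logind session number. A sketch extracting the interesting fields from the first of those lines (the pattern is copied from the entries above):

    package main

    import (
        "fmt"
        "regexp"
    )

    var acceptRe = regexp.MustCompile(
        `Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: RSA (SHA256:\S+)`)

    func main() {
        line := "sshd[3439]: Accepted publickey for core from 139.178.68.195 port 39056 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw"
        if m := acceptRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("user=%s addr=%s port=%s key=%s\n", m[1], m[2], m[3], m[4])
        }
    }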
Dec 13 02:03:12.259792 systemd-logind[1208]: New session 7 of user core. Dec 13 02:03:12.535881 sshd[3439]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:12.540800 systemd[1]: sshd@6-10.128.0.4:22-139.178.68.195:39056.service: Deactivated successfully. Dec 13 02:03:12.542006 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:03:12.543096 systemd-logind[1208]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:03:12.544450 systemd-logind[1208]: Removed session 7. Dec 13 02:03:17.583172 systemd[1]: Started sshd@7-10.128.0.4:22-139.178.68.195:51582.service. Dec 13 02:03:17.877211 sshd[3453]: Accepted publickey for core from 139.178.68.195 port 51582 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:17.879487 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:17.886792 systemd[1]: Started session-8.scope. Dec 13 02:03:17.888103 systemd-logind[1208]: New session 8 of user core. Dec 13 02:03:18.170311 sshd[3453]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:18.175140 systemd[1]: sshd@7-10.128.0.4:22-139.178.68.195:51582.service: Deactivated successfully. Dec 13 02:03:18.176329 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:03:18.177167 systemd-logind[1208]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:03:18.178707 systemd-logind[1208]: Removed session 8. Dec 13 02:03:23.218546 systemd[1]: Started sshd@8-10.128.0.4:22-139.178.68.195:51588.service. Dec 13 02:03:23.514766 sshd[3469]: Accepted publickey for core from 139.178.68.195 port 51588 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:23.516977 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:23.524710 systemd[1]: Started session-9.scope. Dec 13 02:03:23.526083 systemd-logind[1208]: New session 9 of user core. Dec 13 02:03:23.824765 sshd[3469]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:23.829642 systemd[1]: sshd@8-10.128.0.4:22-139.178.68.195:51588.service: Deactivated successfully. Dec 13 02:03:23.830767 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:03:23.831683 systemd-logind[1208]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:03:23.833069 systemd-logind[1208]: Removed session 9. Dec 13 02:03:28.875908 systemd[1]: Started sshd@9-10.128.0.4:22-139.178.68.195:45918.service. Dec 13 02:03:29.173269 sshd[3482]: Accepted publickey for core from 139.178.68.195 port 45918 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:29.175583 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:29.182910 systemd[1]: Started session-10.scope. Dec 13 02:03:29.183552 systemd-logind[1208]: New session 10 of user core. Dec 13 02:03:29.465437 sshd[3482]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:29.470113 systemd-logind[1208]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:03:29.470627 systemd[1]: sshd@9-10.128.0.4:22-139.178.68.195:45918.service: Deactivated successfully. Dec 13 02:03:29.471820 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:03:29.473360 systemd-logind[1208]: Removed session 10. Dec 13 02:03:29.511782 systemd[1]: Started sshd@10-10.128.0.4:22-139.178.68.195:45926.service. 
Dec 13 02:03:29.809196 sshd[3494]: Accepted publickey for core from 139.178.68.195 port 45926 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:29.811055 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:29.818803 systemd-logind[1208]: New session 11 of user core. Dec 13 02:03:29.820290 systemd[1]: Started session-11.scope. Dec 13 02:03:30.138923 sshd[3494]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:30.143788 systemd[1]: sshd@10-10.128.0.4:22-139.178.68.195:45926.service: Deactivated successfully. Dec 13 02:03:30.145074 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:03:30.146167 systemd-logind[1208]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:03:30.147543 systemd-logind[1208]: Removed session 11. Dec 13 02:03:30.185398 systemd[1]: Started sshd@11-10.128.0.4:22-139.178.68.195:45940.service. Dec 13 02:03:30.478291 sshd[3503]: Accepted publickey for core from 139.178.68.195 port 45940 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:30.480746 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:30.487841 systemd[1]: Started session-12.scope. Dec 13 02:03:30.488680 systemd-logind[1208]: New session 12 of user core. Dec 13 02:03:30.765919 sshd[3503]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:30.770498 systemd[1]: sshd@11-10.128.0.4:22-139.178.68.195:45940.service: Deactivated successfully. Dec 13 02:03:30.771734 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:03:30.772811 systemd-logind[1208]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:03:30.774082 systemd-logind[1208]: Removed session 12. Dec 13 02:03:35.812524 systemd[1]: Started sshd@12-10.128.0.4:22-139.178.68.195:45944.service. Dec 13 02:03:36.104233 sshd[3515]: Accepted publickey for core from 139.178.68.195 port 45944 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:36.106772 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:36.113680 systemd-logind[1208]: New session 13 of user core. Dec 13 02:03:36.114363 systemd[1]: Started session-13.scope. Dec 13 02:03:36.396955 sshd[3515]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:36.402030 systemd[1]: sshd@12-10.128.0.4:22-139.178.68.195:45944.service: Deactivated successfully. Dec 13 02:03:36.403232 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:03:36.404778 systemd-logind[1208]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:03:36.406106 systemd-logind[1208]: Removed session 13. Dec 13 02:03:41.445628 systemd[1]: Started sshd@13-10.128.0.4:22-139.178.68.195:55280.service. Dec 13 02:03:41.743407 sshd[3526]: Accepted publickey for core from 139.178.68.195 port 55280 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:41.745868 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:41.753828 systemd[1]: Started session-14.scope. Dec 13 02:03:41.754862 systemd-logind[1208]: New session 14 of user core. Dec 13 02:03:42.032528 sshd[3526]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:42.037156 systemd[1]: sshd@13-10.128.0.4:22-139.178.68.195:55280.service: Deactivated successfully. Dec 13 02:03:42.038394 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 13 02:03:42.039400 systemd-logind[1208]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:03:42.040764 systemd-logind[1208]: Removed session 14. Dec 13 02:03:42.080178 systemd[1]: Started sshd@14-10.128.0.4:22-139.178.68.195:55284.service. Dec 13 02:03:42.376108 sshd[3538]: Accepted publickey for core from 139.178.68.195 port 55284 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:42.377947 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:42.384900 systemd[1]: Started session-15.scope. Dec 13 02:03:42.386034 systemd-logind[1208]: New session 15 of user core. Dec 13 02:03:42.740899 sshd[3538]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:42.745351 systemd[1]: sshd@14-10.128.0.4:22-139.178.68.195:55284.service: Deactivated successfully. Dec 13 02:03:42.746556 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:03:42.747463 systemd-logind[1208]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:03:42.748705 systemd-logind[1208]: Removed session 15. Dec 13 02:03:42.788007 systemd[1]: Started sshd@15-10.128.0.4:22-139.178.68.195:55300.service. Dec 13 02:03:43.082464 sshd[3547]: Accepted publickey for core from 139.178.68.195 port 55300 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:43.084155 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:43.091375 systemd[1]: Started session-16.scope. Dec 13 02:03:43.092639 systemd-logind[1208]: New session 16 of user core. Dec 13 02:03:44.869010 sshd[3547]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:44.874858 systemd-logind[1208]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:03:44.877747 systemd[1]: sshd@15-10.128.0.4:22-139.178.68.195:55300.service: Deactivated successfully. Dec 13 02:03:44.878895 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:03:44.882482 systemd-logind[1208]: Removed session 16. Dec 13 02:03:44.912441 systemd[1]: Started sshd@16-10.128.0.4:22-139.178.68.195:55312.service. Dec 13 02:03:45.205127 sshd[3565]: Accepted publickey for core from 139.178.68.195 port 55312 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:45.207185 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:45.214545 systemd[1]: Started session-17.scope. Dec 13 02:03:45.215769 systemd-logind[1208]: New session 17 of user core. Dec 13 02:03:45.639764 sshd[3565]: pam_unix(sshd:session): session closed for user core Dec 13 02:03:45.644534 systemd[1]: sshd@16-10.128.0.4:22-139.178.68.195:55312.service: Deactivated successfully. Dec 13 02:03:45.645732 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:03:45.646669 systemd-logind[1208]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:03:45.648258 systemd-logind[1208]: Removed session 17. Dec 13 02:03:45.686497 systemd[1]: Started sshd@17-10.128.0.4:22-139.178.68.195:55322.service. Dec 13 02:03:45.982878 sshd[3575]: Accepted publickey for core from 139.178.68.195 port 55322 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:03:45.984893 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:03:45.992007 systemd[1]: Started session-18.scope. Dec 13 02:03:45.992668 systemd-logind[1208]: New session 18 of user core. 
Dec 13 02:03:46.268828 sshd[3575]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:46.273788 systemd[1]: sshd@17-10.128.0.4:22-139.178.68.195:55322.service: Deactivated successfully.
Dec 13 02:03:46.274993 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:03:46.275932 systemd-logind[1208]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:03:46.277234 systemd-logind[1208]: Removed session 18.
Dec 13 02:03:51.315752 systemd[1]: Started sshd@18-10.128.0.4:22-139.178.68.195:37884.service.
Dec 13 02:03:51.608100 sshd[3589]: Accepted publickey for core from 139.178.68.195 port 37884 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:03:51.610498 sshd[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:03:51.618855 systemd[1]: Started session-19.scope.
Dec 13 02:03:51.620049 systemd-logind[1208]: New session 19 of user core.
Dec 13 02:03:51.891683 sshd[3589]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:51.896775 systemd-logind[1208]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:03:51.899009 systemd[1]: sshd@18-10.128.0.4:22-139.178.68.195:37884.service: Deactivated successfully.
Dec 13 02:03:51.900226 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:03:51.902501 systemd-logind[1208]: Removed session 19.
Dec 13 02:03:56.938394 systemd[1]: Started sshd@19-10.128.0.4:22-139.178.68.195:39316.service.
Dec 13 02:03:57.228484 sshd[3605]: Accepted publickey for core from 139.178.68.195 port 39316 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:03:57.230679 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:03:57.237505 systemd[1]: Started session-20.scope.
Dec 13 02:03:57.238378 systemd-logind[1208]: New session 20 of user core.
Dec 13 02:03:57.511515 sshd[3605]: pam_unix(sshd:session): session closed for user core
Dec 13 02:03:57.518466 systemd-logind[1208]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:03:57.518577 systemd[1]: sshd@19-10.128.0.4:22-139.178.68.195:39316.service: Deactivated successfully.
Dec 13 02:03:57.519856 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:03:57.521261 systemd-logind[1208]: Removed session 20.
Dec 13 02:04:02.558907 systemd[1]: Started sshd@20-10.128.0.4:22-139.178.68.195:39326.service.
Dec 13 02:04:02.852486 sshd[3619]: Accepted publickey for core from 139.178.68.195 port 39326 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:04:02.854680 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:04:02.862184 systemd[1]: Started session-21.scope.
Dec 13 02:04:02.863092 systemd-logind[1208]: New session 21 of user core.
Dec 13 02:04:03.182173 sshd[3619]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:03.187061 systemd[1]: sshd@20-10.128.0.4:22-139.178.68.195:39326.service: Deactivated successfully.
Dec 13 02:04:03.188223 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:04:03.189269 systemd-logind[1208]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:04:03.190584 systemd-logind[1208]: Removed session 21.
Dec 13 02:04:03.229133 systemd[1]: Started sshd@21-10.128.0.4:22-139.178.68.195:39342.service.
Dec 13 02:04:03.522926 sshd[3631]: Accepted publickey for core from 139.178.68.195 port 39342 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:04:03.524940 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:04:03.531925 systemd[1]: Started session-22.scope.
Dec 13 02:04:03.532924 systemd-logind[1208]: New session 22 of user core.
Dec 13 02:04:05.014080 env[1219]: time="2024-12-13T02:04:05.014019448Z" level=info msg="StopContainer for \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\" with timeout 30 (s)"
Dec 13 02:04:05.015446 env[1219]: time="2024-12-13T02:04:05.015396979Z" level=info msg="Stop container \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\" with signal terminated"
Dec 13 02:04:05.034971 systemd[1]: run-containerd-runc-k8s.io-3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220-runc.gjY9px.mount: Deactivated successfully.
Dec 13 02:04:05.049685 systemd[1]: cri-containerd-b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3.scope: Deactivated successfully.
Dec 13 02:04:05.070654 env[1219]: time="2024-12-13T02:04:05.070553608Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:04:05.080689 env[1219]: time="2024-12-13T02:04:05.080589242Z" level=info msg="StopContainer for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" with timeout 2 (s)"
Dec 13 02:04:05.082062 env[1219]: time="2024-12-13T02:04:05.082013237Z" level=info msg="Stop container \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" with signal terminated"
Dec 13 02:04:05.092737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3-rootfs.mount: Deactivated successfully.
Dec 13 02:04:05.103326 systemd-networkd[1025]: lxc_health: Link DOWN
Dec 13 02:04:05.103348 systemd-networkd[1025]: lxc_health: Lost carrier
Dec 13 02:04:05.130191 systemd[1]: cri-containerd-3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220.scope: Deactivated successfully.
Dec 13 02:04:05.130550 systemd[1]: cri-containerd-3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220.scope: Consumed 9.432s CPU time.
Dec 13 02:04:05.141002 env[1219]: time="2024-12-13T02:04:05.140940727Z" level=info msg="shim disconnected" id=b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3
Dec 13 02:04:05.142431 env[1219]: time="2024-12-13T02:04:05.142385004Z" level=warning msg="cleaning up after shim disconnected" id=b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3 namespace=k8s.io
Dec 13 02:04:05.142722 env[1219]: time="2024-12-13T02:04:05.142693313Z" level=info msg="cleaning up dead shim"
Dec 13 02:04:05.162799 env[1219]: time="2024-12-13T02:04:05.162746390Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3690 runtime=io.containerd.runc.v2\n"
Dec 13 02:04:05.166217 env[1219]: time="2024-12-13T02:04:05.166168605Z" level=info msg="StopContainer for \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\" returns successfully"
Dec 13 02:04:05.167556 env[1219]: time="2024-12-13T02:04:05.167514927Z" level=info msg="StopPodSandbox for \"d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b\""
Dec 13 02:04:05.167722 env[1219]: time="2024-12-13T02:04:05.167624009Z" level=info msg="Container to stop \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:04:05.171093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b-shm.mount: Deactivated successfully.
Dec 13 02:04:05.188714 systemd[1]: cri-containerd-d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b.scope: Deactivated successfully.
Dec 13 02:04:05.192422 env[1219]: time="2024-12-13T02:04:05.192361447Z" level=info msg="shim disconnected" id=3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220
Dec 13 02:04:05.192812 env[1219]: time="2024-12-13T02:04:05.192777096Z" level=warning msg="cleaning up after shim disconnected" id=3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220 namespace=k8s.io
Dec 13 02:04:05.193014 env[1219]: time="2024-12-13T02:04:05.192980858Z" level=info msg="cleaning up dead shim"
Dec 13 02:04:05.209922 env[1219]: time="2024-12-13T02:04:05.209868850Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3717 runtime=io.containerd.runc.v2\n"
Dec 13 02:04:05.212980 env[1219]: time="2024-12-13T02:04:05.212930360Z" level=info msg="StopContainer for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" returns successfully"
Dec 13 02:04:05.213897 env[1219]: time="2024-12-13T02:04:05.213858124Z" level=info msg="StopPodSandbox for \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\""
Dec 13 02:04:05.214182 env[1219]: time="2024-12-13T02:04:05.214149016Z" level=info msg="Container to stop \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:04:05.214352 env[1219]: time="2024-12-13T02:04:05.214322529Z" level=info msg="Container to stop \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:04:05.214533 env[1219]: time="2024-12-13T02:04:05.214501846Z" level=info msg="Container to stop \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:04:05.214962 env[1219]: time="2024-12-13T02:04:05.214914698Z" level=info msg="Container to stop \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:04:05.215166 env[1219]: time="2024-12-13T02:04:05.215136247Z" level=info msg="Container to stop \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:04:05.224931 systemd[1]: cri-containerd-6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794.scope: Deactivated successfully.
Dec 13 02:04:05.236119 env[1219]: time="2024-12-13T02:04:05.236047559Z" level=info msg="shim disconnected" id=d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b
Dec 13 02:04:05.236119 env[1219]: time="2024-12-13T02:04:05.236109912Z" level=warning msg="cleaning up after shim disconnected" id=d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b namespace=k8s.io
Dec 13 02:04:05.236119 env[1219]: time="2024-12-13T02:04:05.236125147Z" level=info msg="cleaning up dead shim"
Dec 13 02:04:05.256339 env[1219]: time="2024-12-13T02:04:05.256280116Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3751 runtime=io.containerd.runc.v2\n"
Dec 13 02:04:05.256835 env[1219]: time="2024-12-13T02:04:05.256789261Z" level=info msg="TearDown network for sandbox \"d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b\" successfully"
Dec 13 02:04:05.256835 env[1219]: time="2024-12-13T02:04:05.256833050Z" level=info msg="StopPodSandbox for \"d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b\" returns successfully"
Dec 13 02:04:05.277441 env[1219]: time="2024-12-13T02:04:05.275165788Z" level=info msg="shim disconnected" id=6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794
Dec 13 02:04:05.277441 env[1219]: time="2024-12-13T02:04:05.275234894Z" level=warning msg="cleaning up after shim disconnected" id=6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794 namespace=k8s.io
Dec 13 02:04:05.277441 env[1219]: time="2024-12-13T02:04:05.275250342Z" level=info msg="cleaning up dead shim"
Dec 13 02:04:05.291067 env[1219]: time="2024-12-13T02:04:05.290988586Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3777 runtime=io.containerd.runc.v2\n"
Dec 13 02:04:05.291485 env[1219]: time="2024-12-13T02:04:05.291427560Z" level=info msg="TearDown network for sandbox \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" successfully"
Dec 13 02:04:05.291485 env[1219]: time="2024-12-13T02:04:05.291467733Z" level=info msg="StopPodSandbox for \"6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794\" returns successfully"
Dec 13 02:04:05.413541 kubelet[2078]: I1213 02:04:05.413470 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-kernel\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.413541 kubelet[2078]: I1213 02:04:05.413544 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70294008-4610-4a1d-bdba-35e7e738842a-cilium-config-path\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414392 kubelet[2078]: I1213 02:04:05.413574 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-hostproc\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414392 kubelet[2078]: I1213 02:04:05.413633 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fec2f200-7466-47f6-8105-b3792d78219d-cilium-config-path\") pod \"fec2f200-7466-47f6-8105-b3792d78219d\" (UID: \"fec2f200-7466-47f6-8105-b3792d78219d\") "
Dec 13 02:04:05.414392 kubelet[2078]: I1213 02:04:05.413663 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cni-path\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414392 kubelet[2078]: I1213 02:04:05.413693 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-xtables-lock\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414392 kubelet[2078]: I1213 02:04:05.413725 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70294008-4610-4a1d-bdba-35e7e738842a-clustermesh-secrets\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414392 kubelet[2078]: I1213 02:04:05.413754 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-cgroup\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414815 kubelet[2078]: I1213 02:04:05.413788 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txwvb\" (UniqueName: \"kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-kube-api-access-txwvb\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414815 kubelet[2078]: I1213 02:04:05.413820 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9njs\" (UniqueName: \"kubernetes.io/projected/fec2f200-7466-47f6-8105-b3792d78219d-kube-api-access-c9njs\") pod \"fec2f200-7466-47f6-8105-b3792d78219d\" (UID: \"fec2f200-7466-47f6-8105-b3792d78219d\") "
Dec 13 02:04:05.414815 kubelet[2078]: I1213 02:04:05.413853 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-bpf-maps\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414815 kubelet[2078]: I1213 02:04:05.413887 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-etc-cni-netd\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414815 kubelet[2078]: I1213 02:04:05.413916 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-net\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.414815 kubelet[2078]: I1213 02:04:05.413948 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-run\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.415145 kubelet[2078]: I1213 02:04:05.413985 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-hubble-tls\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.415145 kubelet[2078]: I1213 02:04:05.414017 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-lib-modules\") pod \"70294008-4610-4a1d-bdba-35e7e738842a\" (UID: \"70294008-4610-4a1d-bdba-35e7e738842a\") "
Dec 13 02:04:05.415145 kubelet[2078]: I1213 02:04:05.414125 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.415145 kubelet[2078]: I1213 02:04:05.414178 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.417794 kubelet[2078]: I1213 02:04:05.417739 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70294008-4610-4a1d-bdba-35e7e738842a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:04:05.420637 kubelet[2078]: I1213 02:04:05.420562 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-kube-api-access-txwvb" (OuterVolumeSpecName: "kube-api-access-txwvb") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "kube-api-access-txwvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:04:05.420909 kubelet[2078]: I1213 02:04:05.420879 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-hostproc" (OuterVolumeSpecName: "hostproc") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.422301 kubelet[2078]: I1213 02:04:05.422263 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec2f200-7466-47f6-8105-b3792d78219d-kube-api-access-c9njs" (OuterVolumeSpecName: "kube-api-access-c9njs") pod "fec2f200-7466-47f6-8105-b3792d78219d" (UID: "fec2f200-7466-47f6-8105-b3792d78219d"). InnerVolumeSpecName "kube-api-access-c9njs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:04:05.422418 kubelet[2078]: I1213 02:04:05.422321 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.422418 kubelet[2078]: I1213 02:04:05.422358 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.422418 kubelet[2078]: I1213 02:04:05.422387 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.422640 kubelet[2078]: I1213 02:04:05.422416 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.424887 kubelet[2078]: I1213 02:04:05.424854 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec2f200-7466-47f6-8105-b3792d78219d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fec2f200-7466-47f6-8105-b3792d78219d" (UID: "fec2f200-7466-47f6-8105-b3792d78219d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:04:05.425093 kubelet[2078]: I1213 02:04:05.425069 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cni-path" (OuterVolumeSpecName: "cni-path") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.425236 kubelet[2078]: I1213 02:04:05.425214 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.426567 kubelet[2078]: I1213 02:04:05.426436 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:04:05.426567 kubelet[2078]: I1213 02:04:05.426494 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:04:05.429760 kubelet[2078]: I1213 02:04:05.429715 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70294008-4610-4a1d-bdba-35e7e738842a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70294008-4610-4a1d-bdba-35e7e738842a" (UID: "70294008-4610-4a1d-bdba-35e7e738842a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:04:05.515234 kubelet[2078]: I1213 02:04:05.515173 2078 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-bpf-maps\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515234 kubelet[2078]: I1213 02:04:05.515220 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-run\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515234 kubelet[2078]: I1213 02:04:05.515241 2078 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-etc-cni-netd\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515261 2078 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-net\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515279 2078 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-hubble-tls\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515295 2078 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-lib-modules\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515313 2078 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-host-proc-sys-kernel\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515346 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70294008-4610-4a1d-bdba-35e7e738842a-cilium-config-path\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515364 2078 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-hostproc\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515575 kubelet[2078]: I1213 02:04:05.515381 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fec2f200-7466-47f6-8105-b3792d78219d-cilium-config-path\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515852 kubelet[2078]: I1213 02:04:05.515400 2078 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cni-path\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515852 kubelet[2078]: I1213 02:04:05.515419 2078 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-txwvb\" (UniqueName: \"kubernetes.io/projected/70294008-4610-4a1d-bdba-35e7e738842a-kube-api-access-txwvb\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515852 kubelet[2078]: I1213 02:04:05.515440 2078 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c9njs\" (UniqueName: \"kubernetes.io/projected/fec2f200-7466-47f6-8105-b3792d78219d-kube-api-access-c9njs\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515852 kubelet[2078]: I1213 02:04:05.515458 2078 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-xtables-lock\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515852 kubelet[2078]: I1213 02:04:05.515477 2078 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70294008-4610-4a1d-bdba-35e7e738842a-clustermesh-secrets\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:05.515852 kubelet[2078]: I1213 02:04:05.515498 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70294008-4610-4a1d-bdba-35e7e738842a-cilium-cgroup\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:04:06.022314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220-rootfs.mount: Deactivated successfully.
Dec 13 02:04:06.022837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3a15bb3938e403eee7b2a7eb0df9bbb68e3e764f2f2e7b46bd2bdcc5914046b-rootfs.mount: Deactivated successfully.
Dec 13 02:04:06.023120 systemd[1]: var-lib-kubelet-pods-fec2f200\x2d7466\x2d47f6\x2d8105\x2db3792d78219d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc9njs.mount: Deactivated successfully.
Dec 13 02:04:06.023241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794-rootfs.mount: Deactivated successfully.
Dec 13 02:04:06.023351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a0486076f80cb8fc53be09b1dca7d8c9b87dce0711db0f9b4f84930ff8ad794-shm.mount: Deactivated successfully.
Dec 13 02:04:06.023464 systemd[1]: var-lib-kubelet-pods-70294008\x2d4610\x2d4a1d\x2dbdba\x2d35e7e738842a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxwvb.mount: Deactivated successfully.
Dec 13 02:04:06.023568 systemd[1]: var-lib-kubelet-pods-70294008\x2d4610\x2d4a1d\x2dbdba\x2d35e7e738842a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:04:06.023695 systemd[1]: var-lib-kubelet-pods-70294008\x2d4610\x2d4a1d\x2dbdba\x2d35e7e738842a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:04:06.180362 kubelet[2078]: I1213 02:04:06.180314 2078 scope.go:117] "RemoveContainer" containerID="b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3"
Dec 13 02:04:06.183510 env[1219]: time="2024-12-13T02:04:06.182794318Z" level=info msg="RemoveContainer for \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\""
Dec 13 02:04:06.191212 systemd[1]: Removed slice kubepods-besteffort-podfec2f200_7466_47f6_8105_b3792d78219d.slice.
Dec 13 02:04:06.195840 env[1219]: time="2024-12-13T02:04:06.195794292Z" level=info msg="RemoveContainer for \"b856d3ee4324d61dcd7fa991662b81af5620f8353ebe3314047179c8174bffb3\" returns successfully"
Dec 13 02:04:06.198034 kubelet[2078]: I1213 02:04:06.198005 2078 scope.go:117] "RemoveContainer" containerID="3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220"
Dec 13 02:04:06.198401 systemd[1]: Removed slice kubepods-burstable-pod70294008_4610_4a1d_bdba_35e7e738842a.slice.
Dec 13 02:04:06.198568 systemd[1]: kubepods-burstable-pod70294008_4610_4a1d_bdba_35e7e738842a.slice: Consumed 9.585s CPU time.
Dec 13 02:04:06.203767 env[1219]: time="2024-12-13T02:04:06.203707540Z" level=info msg="RemoveContainer for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\""
Dec 13 02:04:06.209470 env[1219]: time="2024-12-13T02:04:06.209424820Z" level=info msg="RemoveContainer for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" returns successfully"
Dec 13 02:04:06.211937 kubelet[2078]: I1213 02:04:06.211910 2078 scope.go:117] "RemoveContainer" containerID="34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680"
Dec 13 02:04:06.213510 env[1219]: time="2024-12-13T02:04:06.213468469Z" level=info msg="RemoveContainer for \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\""
Dec 13 02:04:06.230713 env[1219]: time="2024-12-13T02:04:06.230638353Z" level=info msg="RemoveContainer for \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\" returns successfully"
Dec 13 02:04:06.230939 kubelet[2078]: I1213 02:04:06.230913 2078 scope.go:117] "RemoveContainer" containerID="4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037"
Dec 13 02:04:06.232752 env[1219]: time="2024-12-13T02:04:06.232706797Z" level=info msg="RemoveContainer for \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\""
Dec 13 02:04:06.243464 env[1219]: time="2024-12-13T02:04:06.243405966Z" level=info msg="RemoveContainer for \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\" returns successfully"
Dec 13 02:04:06.243861 kubelet[2078]: I1213 02:04:06.243831 2078 scope.go:117] "RemoveContainer" containerID="8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb"
Dec 13 02:04:06.245781 env[1219]: time="2024-12-13T02:04:06.245701109Z" level=info msg="RemoveContainer for \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\""
Dec 13 02:04:06.250505 env[1219]: time="2024-12-13T02:04:06.250444565Z" level=info msg="RemoveContainer for \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\" returns successfully"
Dec 13 02:04:06.250896 kubelet[2078]: I1213 02:04:06.250864 2078 scope.go:117] "RemoveContainer" containerID="49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7"
Dec 13 02:04:06.252680 env[1219]: time="2024-12-13T02:04:06.252638692Z" level=info msg="RemoveContainer for \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\""
Dec 13 02:04:06.257098 env[1219]: time="2024-12-13T02:04:06.257047718Z" level=info msg="RemoveContainer for \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\" returns successfully"
Dec 13 02:04:06.257373 kubelet[2078]: I1213 02:04:06.257325 2078 scope.go:117] "RemoveContainer" containerID="3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220"
Dec 13 02:04:06.257876 env[1219]: time="2024-12-13T02:04:06.257768545Z" level=error msg="ContainerStatus for \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\": not found"
Dec 13 02:04:06.258040 kubelet[2078]: E1213 02:04:06.258017 2078 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\": not found" containerID="3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220"
Dec 13 02:04:06.258183 kubelet[2078]: I1213 02:04:06.258158 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220"} err="failed to get container status \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dbea36e3e15ba3d39126deef372db963d4a16408fcff2d8ea5e17ad64946220\": not found"
Dec 13 02:04:06.258280 kubelet[2078]: I1213 02:04:06.258190 2078 scope.go:117] "RemoveContainer" containerID="34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680"
Dec 13 02:04:06.258542 env[1219]: time="2024-12-13T02:04:06.258466788Z" level=error msg="ContainerStatus for \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\": not found"
Dec 13 02:04:06.258711 kubelet[2078]: E1213 02:04:06.258682 2078 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\": not found" containerID="34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680"
Dec 13 02:04:06.258790 kubelet[2078]: I1213 02:04:06.258726 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680"} err="failed to get container status \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\": rpc error: code = NotFound desc = an error occurred when try to find container \"34bea01e7d9c2dca0c675f94de978724f626b16709d47056d9b5a1fcaeb2b680\": not found"
Dec 13 02:04:06.258790 kubelet[2078]: I1213 02:04:06.258744 2078 scope.go:117] "RemoveContainer" containerID="4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037"
Dec 13 02:04:06.259067 env[1219]: time="2024-12-13T02:04:06.258986401Z" level=error msg="ContainerStatus for \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\": not found"
Dec 13 02:04:06.259213 kubelet[2078]: E1213 02:04:06.259188 2078 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\": not found" containerID="4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037"
Dec 13 02:04:06.259319 kubelet[2078]: I1213 02:04:06.259236 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037"} err="failed to get container status \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\": rpc error: code = NotFound desc = an error occurred when try to find container \"4597eda8ab56603a0d76062da3ab6fd28ff16eed804124222a3a267541dea037\": not found"
Dec 13 02:04:06.259319 kubelet[2078]: I1213 02:04:06.259253 2078 scope.go:117] "RemoveContainer" containerID="8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb"
Dec 13 02:04:06.259560 env[1219]: time="2024-12-13T02:04:06.259474414Z" level=error msg="ContainerStatus for \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\": not found"
Dec 13 02:04:06.259708 kubelet[2078]: E1213 02:04:06.259688 2078 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\": not found" containerID="8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb"
Dec 13 02:04:06.259800 kubelet[2078]: I1213 02:04:06.259727 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb"} err="failed to get container status \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ee1316995740858cc92a1946b1b4fb1e40675a0cc8fc98b132c28add60dd9bb\": not found"
Dec 13 02:04:06.259800 kubelet[2078]: I1213 02:04:06.259744 2078 scope.go:117] "RemoveContainer" containerID="49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7"
Dec 13 02:04:06.260082 env[1219]: time="2024-12-13T02:04:06.260011501Z" level=error msg="ContainerStatus for \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\": not found"
Dec 13 02:04:06.260248 kubelet[2078]: E1213 02:04:06.260211 2078 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\": not found" containerID="49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7"
Dec 13 02:04:06.260372 kubelet[2078]: I1213 02:04:06.260253 2078 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7"} err="failed to get container status \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"49cc9e5d26b50221e3791204e05a689197ac2959027543cbc9e7f585a353a1f7\": not found"
Dec 13 02:04:06.781180 kubelet[2078]: I1213 02:04:06.781117 2078 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="70294008-4610-4a1d-bdba-35e7e738842a" path="/var/lib/kubelet/pods/70294008-4610-4a1d-bdba-35e7e738842a/volumes"
Dec 13 02:04:06.782184 kubelet[2078]: I1213 02:04:06.782131 2078 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fec2f200-7466-47f6-8105-b3792d78219d" path="/var/lib/kubelet/pods/fec2f200-7466-47f6-8105-b3792d78219d/volumes"
Dec 13 02:04:06.986909 sshd[3631]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:06.991415 systemd[1]: sshd@21-10.128.0.4:22-139.178.68.195:39342.service: Deactivated successfully.
Dec 13 02:04:06.992588 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:04:06.993446 systemd-logind[1208]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:04:06.995423 systemd-logind[1208]: Removed session 22.
Dec 13 02:04:07.032745 systemd[1]: Started sshd@22-10.128.0.4:22-139.178.68.195:34352.service.
Dec 13 02:04:07.326791 sshd[3796]: Accepted publickey for core from 139.178.68.195 port 34352 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:04:07.329122 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:04:07.336940 systemd[1]: Started session-23.scope.
Dec 13 02:04:07.338170 systemd-logind[1208]: New session 23 of user core.
Dec 13 02:04:08.145437 kubelet[2078]: I1213 02:04:08.142963 2078 topology_manager.go:215] "Topology Admit Handler" podUID="7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" podNamespace="kube-system" podName="cilium-mtp78"
Dec 13 02:04:08.145437 kubelet[2078]: E1213 02:04:08.143064 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70294008-4610-4a1d-bdba-35e7e738842a" containerName="mount-cgroup"
Dec 13 02:04:08.145437 kubelet[2078]: E1213 02:04:08.143081 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70294008-4610-4a1d-bdba-35e7e738842a" containerName="apply-sysctl-overwrites"
Dec 13 02:04:08.145437 kubelet[2078]: E1213 02:04:08.143112 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fec2f200-7466-47f6-8105-b3792d78219d" containerName="cilium-operator"
Dec 13 02:04:08.145437 kubelet[2078]: E1213 02:04:08.143125 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70294008-4610-4a1d-bdba-35e7e738842a" containerName="mount-bpf-fs"
Dec 13 02:04:08.145437 kubelet[2078]: E1213 02:04:08.143137 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70294008-4610-4a1d-bdba-35e7e738842a" containerName="clean-cilium-state"
Dec 13 02:04:08.145437 kubelet[2078]: E1213 02:04:08.143149 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70294008-4610-4a1d-bdba-35e7e738842a" containerName="cilium-agent"
Dec 13 02:04:08.145437 kubelet[2078]: I1213 02:04:08.143212 2078 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec2f200-7466-47f6-8105-b3792d78219d" containerName="cilium-operator"
Dec 13 02:04:08.145437 kubelet[2078]: I1213 02:04:08.143224 2078 memory_manager.go:354] "RemoveStaleState removing state" podUID="70294008-4610-4a1d-bdba-35e7e738842a" containerName="cilium-agent"
Dec 13 02:04:08.146948 sshd[3796]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:08.152934 systemd[1]: sshd@22-10.128.0.4:22-139.178.68.195:34352.service: Deactivated successfully.
Dec 13 02:04:08.154104 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:04:08.162168 systemd-logind[1208]: Session 23 logged out. Waiting for processes to exit.
Dec 13 02:04:08.165127 systemd[1]: Created slice kubepods-burstable-pod7cfe4a4e_409b_4209_a4d9_66bc45ce73d8.slice.
Dec 13 02:04:08.166811 systemd-logind[1208]: Removed session 23.
Dec 13 02:04:08.191947 systemd[1]: Started sshd@23-10.128.0.4:22-139.178.68.195:34356.service.
Dec 13 02:04:08.233319 kubelet[2078]: I1213 02:04:08.233278 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-lib-modules\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.233677 kubelet[2078]: I1213 02:04:08.233646 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-clustermesh-secrets\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.233871 kubelet[2078]: I1213 02:04:08.233851 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-net\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.234108 kubelet[2078]: I1213 02:04:08.234084 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-cgroup\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.234341 kubelet[2078]: I1213 02:04:08.234320 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cni-path\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.234528 kubelet[2078]: I1213 02:04:08.234508 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hubble-tls\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.234700 kubelet[2078]: I1213 02:04:08.234682 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hostproc\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.234854 kubelet[2078]: I1213 02:04:08.234837 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-ipsec-secrets\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.235249 kubelet[2078]: I1213 02:04:08.235226 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-bpf-maps\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.235436 kubelet[2078]: I1213 02:04:08.235417 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-xtables-lock\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.235610 kubelet[2078]: I1213 02:04:08.235576 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-run\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.235780 kubelet[2078]: I1213 02:04:08.235763 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-config-path\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.235975 kubelet[2078]: I1213 02:04:08.235957 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-kernel\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.236147 kubelet[2078]: I1213 02:04:08.236131 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-etc-cni-netd\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.236304 kubelet[2078]: I1213 02:04:08.236289 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67pg\" (UniqueName: \"kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-kube-api-access-x67pg\") pod \"cilium-mtp78\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " pod="kube-system/cilium-mtp78"
Dec 13 02:04:08.479734 env[1219]: time="2024-12-13T02:04:08.479658986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtp78,Uid:7cfe4a4e-409b-4209-a4d9-66bc45ce73d8,Namespace:kube-system,Attempt:0,}"
Dec 13 02:04:08.511389 sshd[3807]: Accepted publickey for core from 139.178.68.195 port 34356 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:04:08.513550 env[1219]: time="2024-12-13T02:04:08.513442758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:04:08.513717 env[1219]: time="2024-12-13T02:04:08.513558735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:04:08.513717 env[1219]: time="2024-12-13T02:04:08.513623572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:04:08.513947 env[1219]: time="2024-12-13T02:04:08.513897509Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd pid=3821 runtime=io.containerd.runc.v2
Dec 13 02:04:08.514447 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:04:08.523849 systemd[1]: Started session-24.scope.
Dec 13 02:04:08.524698 systemd-logind[1208]: New session 24 of user core.
Dec 13 02:04:08.547293 systemd[1]: Started cri-containerd-a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd.scope.
Dec 13 02:04:08.590530 env[1219]: time="2024-12-13T02:04:08.590465926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtp78,Uid:7cfe4a4e-409b-4209-a4d9-66bc45ce73d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd\""
Dec 13 02:04:08.596208 env[1219]: time="2024-12-13T02:04:08.595580163Z" level=info msg="CreateContainer within sandbox \"a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:04:08.612864 env[1219]: time="2024-12-13T02:04:08.612812715Z" level=info msg="CreateContainer within sandbox \"a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\""
Dec 13 02:04:08.615210 env[1219]: time="2024-12-13T02:04:08.614898329Z" level=info msg="StartContainer for \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\""
Dec 13 02:04:08.645336 systemd[1]: Started cri-containerd-7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c.scope.
Dec 13 02:04:08.659130 systemd[1]: cri-containerd-7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c.scope: Deactivated successfully.
Dec 13 02:04:08.683206 env[1219]: time="2024-12-13T02:04:08.683139521Z" level=info msg="shim disconnected" id=7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c
Dec 13 02:04:08.683206 env[1219]: time="2024-12-13T02:04:08.683208876Z" level=warning msg="cleaning up after shim disconnected" id=7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c namespace=k8s.io
Dec 13 02:04:08.683780 env[1219]: time="2024-12-13T02:04:08.683222789Z" level=info msg="cleaning up dead shim"
Dec 13 02:04:08.698968 env[1219]: time="2024-12-13T02:04:08.698903413Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3887 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:04:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:04:08.699420 env[1219]: time="2024-12-13T02:04:08.699261521Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed"
Dec 13 02:04:08.699806 env[1219]: time="2024-12-13T02:04:08.699747230Z" level=error msg="Failed to pipe stdout of container \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\"" error="reading from a closed fifo"
Dec 13 02:04:08.699922 env[1219]: time="2024-12-13T02:04:08.699859258Z" level=error msg="Failed to pipe stderr of container \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\"" error="reading from a closed fifo"
Dec 13 02:04:08.702565 env[1219]: time="2024-12-13T02:04:08.702496625Z" level=error msg="StartContainer for \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:04:08.704098 kubelet[2078]: E1213 02:04:08.703853 2078 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c"
Dec 13 02:04:08.704098 kubelet[2078]: E1213 02:04:08.704021 2078 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:04:08.704098 kubelet[2078]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:04:08.704098 kubelet[2078]: rm /hostbin/cilium-mount
Dec 13 02:04:08.704438 kubelet[2078]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-x67pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mtp78_kube-system(7cfe4a4e-409b-4209-a4d9-66bc45ce73d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:04:08.704657 kubelet[2078]: E1213 02:04:08.704083 2078 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mtp78" podUID="7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"
Dec 13 02:04:08.829159 sshd[3807]: pam_unix(sshd:session): session closed for user core
Dec 13 02:04:08.834902 systemd-logind[1208]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:04:08.835210 systemd[1]: sshd@23-10.128.0.4:22-139.178.68.195:34356.service: Deactivated successfully. Dec 13 02:04:08.836295 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:04:08.837753 systemd-logind[1208]: Removed session 24. Dec 13 02:04:08.877779 systemd[1]: Started sshd@24-10.128.0.4:22-139.178.68.195:34368.service. Dec 13 02:04:09.176901 sshd[3904]: Accepted publickey for core from 139.178.68.195 port 34368 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:04:09.178737 sshd[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:04:09.185814 systemd[1]: Started session-25.scope. Dec 13 02:04:09.186690 systemd-logind[1208]: New session 25 of user core. Dec 13 02:04:09.207958 env[1219]: time="2024-12-13T02:04:09.206754457Z" level=info msg="StopPodSandbox for \"a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd\"" Dec 13 02:04:09.207958 env[1219]: time="2024-12-13T02:04:09.207807463Z" level=info msg="Container to stop \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:04:09.224858 systemd[1]: cri-containerd-a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd.scope: Deactivated successfully. Dec 13 02:04:09.263133 env[1219]: time="2024-12-13T02:04:09.263067049Z" level=info msg="shim disconnected" id=a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd Dec 13 02:04:09.263587 env[1219]: time="2024-12-13T02:04:09.263550090Z" level=warning msg="cleaning up after shim disconnected" id=a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd namespace=k8s.io Dec 13 02:04:09.263757 env[1219]: time="2024-12-13T02:04:09.263733770Z" level=info msg="cleaning up dead shim" Dec 13 02:04:09.276510 env[1219]: time="2024-12-13T02:04:09.276451326Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n" Dec 13 02:04:09.277081 env[1219]: time="2024-12-13T02:04:09.276924829Z" level=info msg="TearDown network for sandbox \"a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd\" successfully" Dec 13 02:04:09.277081 env[1219]: time="2024-12-13T02:04:09.277079035Z" level=info msg="StopPodSandbox for \"a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd\" returns successfully" Dec 13 02:04:09.345834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd-shm.mount: Deactivated successfully. 
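
[Annotation] The StopPodSandbox / TearDown sequence just above is the kubelet giving up on the sandbox once its init container can never start: it issues a CRI RPC against containerd, which stops the pause container and unmounts the sandbox's shm mount (the run-containerd-...-shm.mount deactivation). A hedged sketch of the same RPC made directly against the CRI socket (the socket path and timeout are assumptions; the sandbox ID is copied from the log):

```go
// Sketch: issue the same CRI StopPodSandbox call the kubelet logs above.
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed containerd CRI endpoint on a Flatcar node.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Sandbox ID of cilium-mtp78, taken from the log entries above.
	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "a56fd437b2bcd6720a95ab7ad076ef89ee0b40f2c1156c08e2e4ce9ff467c9dd",
	})
	if err != nil {
		panic(err)
	}
}
```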
Dec 13 02:04:09.450520 kubelet[2078]: I1213 02:04:09.450477 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-config-path\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.454914 kubelet[2078]: I1213 02:04:09.454878 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-kernel\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455111 kubelet[2078]: I1213 02:04:09.454940 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hubble-tls\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455111 kubelet[2078]: I1213 02:04:09.454972 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-xtables-lock\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455111 kubelet[2078]: I1213 02:04:09.455013 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-run\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455111 kubelet[2078]: I1213 02:04:09.455046 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-bpf-maps\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455111 kubelet[2078]: I1213 02:04:09.455073 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hostproc\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455111 kubelet[2078]: I1213 02:04:09.455108 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-cgroup\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455452 kubelet[2078]: I1213 02:04:09.455142 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cni-path\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455452 kubelet[2078]: I1213 02:04:09.455174 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-net\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455452 kubelet[2078]: I1213 02:04:09.455205 2078 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-lib-modules\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455452 kubelet[2078]: I1213 02:04:09.455243 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-clustermesh-secrets\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455452 kubelet[2078]: I1213 02:04:09.455280 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x67pg\" (UniqueName: \"kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-kube-api-access-x67pg\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455452 kubelet[2078]: I1213 02:04:09.455318 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-ipsec-secrets\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455802 kubelet[2078]: I1213 02:04:09.455352 2078 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-etc-cni-netd\") pod \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\" (UID: \"7cfe4a4e-409b-4209-a4d9-66bc45ce73d8\") " Dec 13 02:04:09.455802 kubelet[2078]: I1213 02:04:09.455427 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.455802 kubelet[2078]: I1213 02:04:09.454783 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:04:09.456055 kubelet[2078]: I1213 02:04:09.456004 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.456171 kubelet[2078]: I1213 02:04:09.456085 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cni-path" (OuterVolumeSpecName: "cni-path") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.456171 kubelet[2078]: I1213 02:04:09.456119 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.456171 kubelet[2078]: I1213 02:04:09.456166 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.456357 kubelet[2078]: I1213 02:04:09.456191 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.456357 kubelet[2078]: I1213 02:04:09.456239 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hostproc" (OuterVolumeSpecName: "hostproc") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.456357 kubelet[2078]: I1213 02:04:09.456289 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.457905 kubelet[2078]: I1213 02:04:09.457854 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.458058 kubelet[2078]: I1213 02:04:09.457922 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:04:09.469205 kubelet[2078]: I1213 02:04:09.463712 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:04:09.467524 systemd[1]: var-lib-kubelet-pods-7cfe4a4e\x2d409b\x2d4209\x2da4d9\x2d66bc45ce73d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:04:09.471720 systemd[1]: var-lib-kubelet-pods-7cfe4a4e\x2d409b\x2d4209\x2da4d9\x2d66bc45ce73d8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:04:09.471896 systemd[1]: var-lib-kubelet-pods-7cfe4a4e\x2d409b\x2d4209\x2da4d9\x2d66bc45ce73d8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:04:09.477324 kubelet[2078]: I1213 02:04:09.477273 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:04:09.478793 kubelet[2078]: I1213 02:04:09.478745 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:04:09.481873 kubelet[2078]: I1213 02:04:09.481804 2078 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-kube-api-access-x67pg" (OuterVolumeSpecName: "kube-api-access-x67pg") pod "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" (UID: "7cfe4a4e-409b-4209-a4d9-66bc45ce73d8"). InnerVolumeSpecName "kube-api-access-x67pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:04:09.483062 systemd[1]: var-lib-kubelet-pods-7cfe4a4e\x2d409b\x2d4209\x2da4d9\x2d66bc45ce73d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx67pg.mount: Deactivated successfully. 
Dec 13 02:04:09.556122 kubelet[2078]: I1213 02:04:09.556063 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-ipsec-secrets\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556122 kubelet[2078]: I1213 02:04:09.556134 2078 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-etc-cni-netd\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556156 2078 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hubble-tls\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556174 2078 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-xtables-lock\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556192 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-run\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556209 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-config-path\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556228 2078 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-kernel\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556246 2078 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-bpf-maps\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556432 kubelet[2078]: I1213 02:04:09.556263 2078 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cilium-cgroup\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556806 kubelet[2078]: I1213 02:04:09.556282 2078 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-hostproc\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556806 kubelet[2078]: I1213 02:04:09.556298 2078 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-cni-path\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556806 kubelet[2078]: I1213 02:04:09.556318 2078 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-lib-modules\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556806 kubelet[2078]: I1213 02:04:09.556335 2078 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-clustermesh-secrets\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556806 kubelet[2078]: I1213 02:04:09.556353 2078 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-host-proc-sys-net\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.556806 kubelet[2078]: I1213 02:04:09.556374 2078 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x67pg\" (UniqueName: \"kubernetes.io/projected/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8-kube-api-access-x67pg\") on node \"ci-3510-3-6-0974f7a80cb669e5f3e3.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:04:09.905401 kubelet[2078]: E1213 02:04:09.905261 2078 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:04:10.210806 kubelet[2078]: I1213 02:04:10.210769 2078 scope.go:117] "RemoveContainer" containerID="7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c" Dec 13 02:04:10.215419 env[1219]: time="2024-12-13T02:04:10.214976038Z" level=info msg="RemoveContainer for \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\"" Dec 13 02:04:10.217947 systemd[1]: Removed slice kubepods-burstable-pod7cfe4a4e_409b_4209_a4d9_66bc45ce73d8.slice. Dec 13 02:04:10.221662 env[1219]: time="2024-12-13T02:04:10.221425777Z" level=info msg="RemoveContainer for \"7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c\" returns successfully" Dec 13 02:04:10.262501 kubelet[2078]: I1213 02:04:10.262427 2078 topology_manager.go:215] "Topology Admit Handler" podUID="f0c0e609-e2de-47b4-b57f-a3e43d603c6f" podNamespace="kube-system" podName="cilium-22k4w" Dec 13 02:04:10.262752 kubelet[2078]: E1213 02:04:10.262552 2078 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" containerName="mount-cgroup" Dec 13 02:04:10.262752 kubelet[2078]: I1213 02:04:10.262628 2078 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" containerName="mount-cgroup" Dec 13 02:04:10.273393 systemd[1]: Created slice kubepods-burstable-podf0c0e609_e2de_47b4_b57f_a3e43d603c6f.slice. 
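
[Annotation] The volume entries above follow the kubelet's two-phase teardown: every volume of the dead pod is first unmounted from the pod's volumes directory (the UnmountVolume.TearDown lines, with systemd deactivating the matching .mount units), and only afterwards reported detached with an empty DevicePath (the reconciler_common.go "Volume detached" lines). Once both phases finish, the pod's cgroup slice is removed and a replacement pod, cilium-22k4w, is admitted. A schematic sketch of that ordering, using a few volume names from the log (an illustration of the pattern only, not kubelet's implementation):

```go
// Schematic sketch of the two-phase volume teardown visible in the log.
package main

import "fmt"

type volume struct {
	name       string
	devicePath string
}

func reconcileTeardown(podUID string, vols []volume) {
	// Phase 1: unmount each volume from the pod's volumes dir.
	for i := range vols {
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q pod %q\n",
			vols[i].name, podUID)
		vols[i].devicePath = "" // mount gone, nothing left to detach
	}
	// Phase 2: report each volume detached once no device path remains.
	for _, v := range vols {
		fmt.Printf("Volume detached for volume %q DevicePath %q\n",
			v.name, v.devicePath)
	}
}

func main() {
	reconcileTeardown("7cfe4a4e-409b-4209-a4d9-66bc45ce73d8", []volume{
		{name: "cilium-config-path"},
		{name: "hubble-tls"},
		{name: "clustermesh-secrets"},
		{name: "kube-api-access-x67pg"},
	})
}
```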
Dec 13 02:04:10.360982 kubelet[2078]: I1213 02:04:10.360938 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-cilium-cgroup\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361314 kubelet[2078]: I1213 02:04:10.361290 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-host-proc-sys-kernel\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361553 kubelet[2078]: I1213 02:04:10.361529 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-clustermesh-secrets\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361689 kubelet[2078]: I1213 02:04:10.361623 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-cilium-config-path\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361689 kubelet[2078]: I1213 02:04:10.361664 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-host-proc-sys-net\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361808 kubelet[2078]: I1213 02:04:10.361701 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-bpf-maps\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361808 kubelet[2078]: I1213 02:04:10.361738 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bmk\" (UniqueName: \"kubernetes.io/projected/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-kube-api-access-j8bmk\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361808 kubelet[2078]: I1213 02:04:10.361774 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-hostproc\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361979 kubelet[2078]: I1213 02:04:10.361811 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-cilium-run\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361979 kubelet[2078]: I1213 02:04:10.361848 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-etc-cni-netd\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361979 kubelet[2078]: I1213 02:04:10.361884 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-xtables-lock\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361979 kubelet[2078]: I1213 02:04:10.361925 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-cni-path\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.361979 kubelet[2078]: I1213 02:04:10.361958 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-lib-modules\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.362256 kubelet[2078]: I1213 02:04:10.362004 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-cilium-ipsec-secrets\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.362256 kubelet[2078]: I1213 02:04:10.362040 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0c0e609-e2de-47b4-b57f-a3e43d603c6f-hubble-tls\") pod \"cilium-22k4w\" (UID: \"f0c0e609-e2de-47b4-b57f-a3e43d603c6f\") " pod="kube-system/cilium-22k4w" Dec 13 02:04:10.578465 env[1219]: time="2024-12-13T02:04:10.578307140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22k4w,Uid:f0c0e609-e2de-47b4-b57f-a3e43d603c6f,Namespace:kube-system,Attempt:0,}" Dec 13 02:04:10.606552 env[1219]: time="2024-12-13T02:04:10.606454368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:04:10.606818 env[1219]: time="2024-12-13T02:04:10.606520567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:04:10.606818 env[1219]: time="2024-12-13T02:04:10.606538059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:04:10.606993 env[1219]: time="2024-12-13T02:04:10.606786204Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5 pid=3963 runtime=io.containerd.runc.v2 Dec 13 02:04:10.633688 systemd[1]: Started cri-containerd-5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5.scope. 
Dec 13 02:04:10.665813 env[1219]: time="2024-12-13T02:04:10.665746023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22k4w,Uid:f0c0e609-e2de-47b4-b57f-a3e43d603c6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\"" Dec 13 02:04:10.672162 env[1219]: time="2024-12-13T02:04:10.671802525Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:04:10.687969 env[1219]: time="2024-12-13T02:04:10.687901995Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e\"" Dec 13 02:04:10.690284 env[1219]: time="2024-12-13T02:04:10.688862662Z" level=info msg="StartContainer for \"505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e\"" Dec 13 02:04:10.712923 systemd[1]: Started cri-containerd-505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e.scope. Dec 13 02:04:10.759825 env[1219]: time="2024-12-13T02:04:10.759764672Z" level=info msg="StartContainer for \"505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e\" returns successfully" Dec 13 02:04:10.769859 systemd[1]: cri-containerd-505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e.scope: Deactivated successfully. Dec 13 02:04:10.781630 kubelet[2078]: I1213 02:04:10.781577 2078 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7cfe4a4e-409b-4209-a4d9-66bc45ce73d8" path="/var/lib/kubelet/pods/7cfe4a4e-409b-4209-a4d9-66bc45ce73d8/volumes" Dec 13 02:04:10.806359 env[1219]: time="2024-12-13T02:04:10.806290636Z" level=info msg="shim disconnected" id=505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e Dec 13 02:04:10.806905 env[1219]: time="2024-12-13T02:04:10.806806509Z" level=warning msg="cleaning up after shim disconnected" id=505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e namespace=k8s.io Dec 13 02:04:10.806905 env[1219]: time="2024-12-13T02:04:10.806887842Z" level=info msg="cleaning up dead shim" Dec 13 02:04:10.818905 env[1219]: time="2024-12-13T02:04:10.818856447Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4045 runtime=io.containerd.runc.v2\n" Dec 13 02:04:11.221233 env[1219]: time="2024-12-13T02:04:11.221173736Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:04:11.239941 env[1219]: time="2024-12-13T02:04:11.239873989Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b\"" Dec 13 02:04:11.246866 env[1219]: time="2024-12-13T02:04:11.245201874Z" level=info msg="StartContainer for \"d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b\"" Dec 13 02:04:11.276446 systemd[1]: Started cri-containerd-d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b.scope. 
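
[Annotation] From here the replacement pod's init chain proceeds normally: mount-cgroup is created and started successfully this time, and the "shim disconnected" / "cleaning up dead shim" messages that follow are the ordinary exit path of a short-lived init container, not a failure. When following such a chain by hand it helps to pull the name-to-ID mapping out of the containerd lines; a small sketch (the regexes are assumptions tailored to the line shapes in this log, shown here against two sample lines copied from it):

```go
// Sketch: extract container names and IDs from containerd log lines like
// the ones above, so the init chain (mount-cgroup, apply-sysctl-overwrites,
// mount-bpf-fs, clean-cilium-state, cilium-agent) can be followed by ID.
package main

import (
	"fmt"
	"regexp"
)

var (
	created = regexp.MustCompile(`for &ContainerMetadata\{Name:([^,]+),.*returns container id \\"([0-9a-f]+)\\"`)
	started = regexp.MustCompile(`StartContainer for \\"([0-9a-f]+)\\" returns successfully`)
)

func main() {
	lines := []string{
		`level=info msg="CreateContainer within sandbox ... for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e\""`,
		`level=info msg="StartContainer for \"505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e\" returns successfully"`,
	}
	names := map[string]string{} // container ID -> name
	for _, l := range lines {
		if m := created.FindStringSubmatch(l); m != nil {
			names[m[2]] = m[1]
			fmt.Printf("created %s (%s)\n", m[1], m[2][:12])
		}
		if m := started.FindStringSubmatch(l); m != nil {
			fmt.Printf("started %s (%s)\n", names[m[1]], m[1][:12])
		}
	}
}
```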
Dec 13 02:04:11.318861 env[1219]: time="2024-12-13T02:04:11.318799646Z" level=info msg="StartContainer for \"d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b\" returns successfully" Dec 13 02:04:11.326168 systemd[1]: cri-containerd-d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b.scope: Deactivated successfully. Dec 13 02:04:11.356861 env[1219]: time="2024-12-13T02:04:11.356783517Z" level=info msg="shim disconnected" id=d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b Dec 13 02:04:11.356861 env[1219]: time="2024-12-13T02:04:11.356852180Z" level=warning msg="cleaning up after shim disconnected" id=d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b namespace=k8s.io Dec 13 02:04:11.356861 env[1219]: time="2024-12-13T02:04:11.356866230Z" level=info msg="cleaning up dead shim" Dec 13 02:04:11.372269 env[1219]: time="2024-12-13T02:04:11.372211729Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4112 runtime=io.containerd.runc.v2\n" Dec 13 02:04:11.790231 kubelet[2078]: W1213 02:04:11.790149 2078 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cfe4a4e_409b_4209_a4d9_66bc45ce73d8.slice/cri-containerd-7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c.scope WatchSource:0}: container "7771854c9aa0fa68816434f5e6978a95f6c5ad0209d197408d2cc882a427f15c" in namespace "k8s.io": not found Dec 13 02:04:12.226236 env[1219]: time="2024-12-13T02:04:12.225732058Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:04:12.249017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019874742.mount: Deactivated successfully. Dec 13 02:04:12.269724 env[1219]: time="2024-12-13T02:04:12.269645057Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f\"" Dec 13 02:04:12.271015 env[1219]: time="2024-12-13T02:04:12.270974567Z" level=info msg="StartContainer for \"f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f\"" Dec 13 02:04:12.300515 systemd[1]: Started cri-containerd-f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f.scope. Dec 13 02:04:12.350736 env[1219]: time="2024-12-13T02:04:12.350680029Z" level=info msg="StartContainer for \"f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f\" returns successfully" Dec 13 02:04:12.354669 systemd[1]: cri-containerd-f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f.scope: Deactivated successfully. 
Dec 13 02:04:12.388649 env[1219]: time="2024-12-13T02:04:12.388558921Z" level=info msg="shim disconnected" id=f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f Dec 13 02:04:12.388649 env[1219]: time="2024-12-13T02:04:12.388651774Z" level=warning msg="cleaning up after shim disconnected" id=f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f namespace=k8s.io Dec 13 02:04:12.389031 env[1219]: time="2024-12-13T02:04:12.388667292Z" level=info msg="cleaning up dead shim" Dec 13 02:04:12.401169 env[1219]: time="2024-12-13T02:04:12.401103585Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4171 runtime=io.containerd.runc.v2\n" Dec 13 02:04:13.231169 env[1219]: time="2024-12-13T02:04:13.231099615Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:04:13.251434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656185854.mount: Deactivated successfully. Dec 13 02:04:13.266493 env[1219]: time="2024-12-13T02:04:13.266387472Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854\"" Dec 13 02:04:13.267424 env[1219]: time="2024-12-13T02:04:13.267353741Z" level=info msg="StartContainer for \"8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854\"" Dec 13 02:04:13.308005 systemd[1]: Started cri-containerd-8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854.scope. Dec 13 02:04:13.347584 systemd[1]: cri-containerd-8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854.scope: Deactivated successfully. Dec 13 02:04:13.350358 env[1219]: time="2024-12-13T02:04:13.350304400Z" level=info msg="StartContainer for \"8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854\" returns successfully" Dec 13 02:04:13.381844 env[1219]: time="2024-12-13T02:04:13.381779625Z" level=info msg="shim disconnected" id=8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854 Dec 13 02:04:13.381844 env[1219]: time="2024-12-13T02:04:13.381845738Z" level=warning msg="cleaning up after shim disconnected" id=8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854 namespace=k8s.io Dec 13 02:04:13.382208 env[1219]: time="2024-12-13T02:04:13.381860628Z" level=info msg="cleaning up dead shim" Dec 13 02:04:13.398966 env[1219]: time="2024-12-13T02:04:13.398896293Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:04:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4226 runtime=io.containerd.runc.v2\n" Dec 13 02:04:13.474817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854-rootfs.mount: Deactivated successfully. 
Dec 13 02:04:13.777512 kubelet[2078]: E1213 02:04:13.777442 2078 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-7d7p7" podUID="a203c4af-7e31-4be7-83a0-775ef35d2c6c" Dec 13 02:04:14.237237 env[1219]: time="2024-12-13T02:04:14.237170576Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:04:14.267077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071387022.mount: Deactivated successfully. Dec 13 02:04:14.272852 env[1219]: time="2024-12-13T02:04:14.272791511Z" level=info msg="CreateContainer within sandbox \"5a7ed3dc67c4003b99d4b6d161935cb4c12dd7b50c03cf5abc13e66ed9de0ec5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd\"" Dec 13 02:04:14.274302 env[1219]: time="2024-12-13T02:04:14.274257541Z" level=info msg="StartContainer for \"f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd\"" Dec 13 02:04:14.278425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228604996.mount: Deactivated successfully. Dec 13 02:04:14.326881 systemd[1]: Started cri-containerd-f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd.scope. Dec 13 02:04:14.387570 env[1219]: time="2024-12-13T02:04:14.387512205Z" level=info msg="StartContainer for \"f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd\" returns successfully" Dec 13 02:04:14.872652 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:04:14.902909 kubelet[2078]: W1213 02:04:14.902855 2078 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0c0e609_e2de_47b4_b57f_a3e43d603c6f.slice/cri-containerd-505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e.scope WatchSource:0}: task 505acd9f20629d2f247102a59dfd5973e5cf9c72495d8e20c38479e85ada5e1e not found: not found Dec 13 02:04:15.627989 systemd[1]: run-containerd-runc-k8s.io-f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd-runc.gvRDPE.mount: Deactivated successfully. Dec 13 02:04:17.812358 systemd[1]: run-containerd-runc-k8s.io-f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd-runc.PEydZz.mount: Deactivated successfully. 
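
[Annotation] Two things above are noise rather than faults. The kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" line typically appears when the first IPsec ESP state is installed (this pod mounts cilium-ipsec-secrets, so the agent enables transparent encryption); it means the kernel's crypto self-test table has no vector for that AEAD template, not that the algorithm failed. Likewise, cadvisor's "Failed to process watch event ... not found" warnings are races against the already-exited init-container scopes. On a Linux host, the instantiated template can be confirmed via /proc/crypto, as in this sketch:

```go
// Sketch: list /proc/crypto entries for the rfc4106 AEAD template that
// the kernel message above refers to.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/crypto")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Matching lines look like: "name : seqiv(rfc4106(gcm(aes)))".
		if strings.HasPrefix(line, "name") && strings.Contains(line, "rfc4106") {
			fmt.Println(line)
		}
	}
}
```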
Dec 13 02:04:18.012253 kubelet[2078]: W1213 02:04:18.012178 2078 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0c0e609_e2de_47b4_b57f_a3e43d603c6f.slice/cri-containerd-d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b.scope WatchSource:0}: task d5b14caf9f2023dcdfa30a1e212df22b2f726928e776d81e5c6cf477de48226b not found: not found Dec 13 02:04:18.128551 systemd-networkd[1025]: lxc_health: Link UP Dec 13 02:04:18.159725 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:04:18.161260 systemd-networkd[1025]: lxc_health: Gained carrier Dec 13 02:04:18.612195 kubelet[2078]: I1213 02:04:18.612139 2078 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-22k4w" podStartSLOduration=8.612060178 podStartE2EDuration="8.612060178s" podCreationTimestamp="2024-12-13 02:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:04:15.263394072 +0000 UTC m=+130.810891756" watchObservedRunningTime="2024-12-13 02:04:18.612060178 +0000 UTC m=+134.159557961" Dec 13 02:04:19.810440 systemd-networkd[1025]: lxc_health: Gained IPv6LL Dec 13 02:04:20.151220 systemd[1]: run-containerd-runc-k8s.io-f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd-runc.NW58dB.mount: Deactivated successfully. Dec 13 02:04:21.128009 kubelet[2078]: W1213 02:04:21.127881 2078 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0c0e609_e2de_47b4_b57f_a3e43d603c6f.slice/cri-containerd-f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f.scope WatchSource:0}: task f46ed32e23cd84c8008db228c6f9119b8e6ce8980669dd4b8c614848ca1d712f not found: not found Dec 13 02:04:22.423503 systemd[1]: run-containerd-runc-k8s.io-f05e636b56d3e38ab2466fa83a8e58a39d59e6ea11f22d8f98e10729adebfefd-runc.9CQ1NQ.mount: Deactivated successfully. Dec 13 02:04:24.238246 kubelet[2078]: W1213 02:04:24.238179 2078 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0c0e609_e2de_47b4_b57f_a3e43d603c6f.slice/cri-containerd-8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854.scope WatchSource:0}: task 8ab4086aeb2112bb88a97591182e2455c170d90825c8c16d87f7b6d2fc657854 not found: not found Dec 13 02:04:24.739386 sshd[3904]: pam_unix(sshd:session): session closed for user core Dec 13 02:04:24.743781 systemd[1]: sshd@24-10.128.0.4:22-139.178.68.195:34368.service: Deactivated successfully. Dec 13 02:04:24.744986 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:04:24.746040 systemd-logind[1208]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:04:24.747416 systemd-logind[1208]: Removed session 25.
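
[Annotation] The "Observed pod startup duration" entry above is self-consistent: since the pod never pulled an image (firstStartedPulling and lastFinishedPulling are zero times), podStartSLOduration is simply observedRunningTime minus podCreationTimestamp. A quick check using the logged values:

```go
// Sketch: recompute podStartSLOduration=8.612060178s from the timestamps
// in the kubelet entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Values copied from the log; parse errors ignored for brevity.
	created, _ := time.Parse(layout, "2024-12-13 02:04:10 +0000 UTC")
	running, _ := time.Parse(layout, "2024-12-13 02:04:18.612060178 +0000 UTC")
	fmt.Println(running.Sub(created).Seconds()) // 8.612060178
}
```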