Dec 13 14:28:28.143981 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:28:28.144036 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:28.144054 kernel: BIOS-provided physical RAM map:
Dec 13 14:28:28.144067 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 14:28:28.144079 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 14:28:28.144091 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 14:28:28.144112 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 14:28:28.144126 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 14:28:28.144139 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable
Dec 13 14:28:28.144152 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data
Dec 13 14:28:28.144166 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable
Dec 13 14:28:28.144179 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 13 14:28:28.144192 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 14:28:28.144206 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 14:28:28.144227 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 14:28:28.144242 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 14:28:28.144257 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 14:28:28.144271 kernel: NX (Execute Disable) protection: active
Dec 13 14:28:28.144285 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:28:28.144300 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018
Dec 13 14:28:28.144314 kernel: random: crng init done
Dec 13 14:28:28.144329 kernel: SMBIOS 2.4 present.
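The usable regions in the e820 map above add up to exactly the total RAM the kernel reports later in this log. A minimal Python sketch (a hypothetical helper, not part of the boot tooling) that sums the usable ranges from a captured dmesg excerpt:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all e820 ranges marked 'usable' (end address is inclusive)."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1
        return total

    # Feeding in the map above yields 8,049,197,056 bytes = 7,860,544 KiB,
    # exactly the 7860544K total in the "Memory:" line further down.
    # print(usable_bytes(open("dmesg.txt").read()) // 1024, "KiB")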
Dec 13 14:28:28.144348 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 14:28:28.144362 kernel: Hypervisor detected: KVM
Dec 13 14:28:28.144377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:28:28.144391 kernel: kvm-clock: cpu 0, msr 18f19a001, primary cpu clock
Dec 13 14:28:28.144406 kernel: kvm-clock: using sched offset of 12739191459 cycles
Dec 13 14:28:28.144423 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:28:28.144450 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 14:28:28.144465 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:28:28.144481 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:28:28.144496 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 14:28:28.144515 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:28:28.144531 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 14:28:28.144546 kernel: Using GB pages for direct mapping
Dec 13 14:28:28.144561 kernel: Secure boot disabled
Dec 13 14:28:28.144577 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:28:28.144593 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 14:28:28.144607 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 14:28:28.144624 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 14:28:28.144651 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 14:28:28.144668 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 14:28:28.144685 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 14:28:28.144701 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 14:28:28.144717 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 14:28:28.144733 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 14:28:28.144754 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 14:28:28.144770 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 14:28:28.144787 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 14:28:28.144803 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 14:28:28.144819 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 14:28:28.144836 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 14:28:28.144917 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 14:28:28.144935 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 14:28:28.144951 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 14:28:28.144972 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 14:28:28.144990 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 14:28:28.145005 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:28:28.145021 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:28:28.145037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 14:28:28.145053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 14:28:28.145069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 14:28:28.145086 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 14:28:28.145102 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 14:28:28.145122 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Dec 13 14:28:28.145138 kernel: Zone ranges:
Dec 13 14:28:28.145155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:28:28.145171 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:28:28.145188 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 14:28:28.145205 kernel: Movable zone start for each node
Dec 13 14:28:28.145221 kernel: Early memory node ranges
Dec 13 14:28:28.145237 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 14:28:28.145253 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 14:28:28.145275 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff]
Dec 13 14:28:28.145290 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff]
Dec 13 14:28:28.145306 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 14:28:28.145321 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 14:28:28.145337 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 14:28:28.145353 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:28:28.145368 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 14:28:28.145384 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 14:28:28.145400 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 13 14:28:28.145420 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 14:28:28.145446 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 14:28:28.145462 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:28:28.145478 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:28:28.145494 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:28:28.145510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:28:28.145526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:28:28.145541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:28:28.145557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:28:28.145577 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:28:28.145593 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:28:28.145609 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 14:28:28.145625 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:28:28.145641 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:28:28.145657 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:28:28.145673 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:28:28.145688 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:28:28.145704 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:28:28.145724 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:28:28.145741 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:28:28.145756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Dec 13 14:28:28.145772 kernel: Policy zone: Normal
Dec 13 14:28:28.145791 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:28.145808 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:28:28.145823 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 14:28:28.145839 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:28:28.145869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:28:28.145891 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 344876K reserved, 0K cma-reserved)
Dec 13 14:28:28.145907 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:28:28.145923 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:28:28.145940 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:28:28.145956 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:28:28.145972 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:28:28.145989 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:28:28.146006 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:28:28.146027 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:28:28.146059 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:28:28.146077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:28:28.146098 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:28:28.146115 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:28:28.146132 kernel: Console: colour dummy device 80x25
Dec 13 14:28:28.146149 kernel: printk: console [ttyS0] enabled
Dec 13 14:28:28.146166 kernel: ACPI: Core revision 20210730
Dec 13 14:28:28.146184 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:28:28.146201 kernel: x2apic enabled
Dec 13 14:28:28.146222 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:28:28.146237 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 14:28:28.146253 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 14:28:28.146269 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 14:28:28.146287 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 14:28:28.146304 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 14:28:28.146321 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:28:28.146341 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 14:28:28.146357 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 14:28:28.146374 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 14:28:28.146391 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:28:28.146407 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:28:28.146424 kernel: RETBleed: Mitigation: IBRS
Dec 13 14:28:28.146449 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:28:28.146466 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Dec 13 14:28:28.146483 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:28:28.146504 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 14:28:28.146520 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:28:28.146537 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:28:28.146553 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:28:28.146569 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:28:28.146586 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:28:28.146601 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:28:28.146617 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:28:28.146633 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:28:28.146655 kernel: LSM: Security Framework initializing
Dec 13 14:28:28.146671 kernel: SELinux: Initializing.
Dec 13 14:28:28.146688 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:28:28.146705 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:28:28.146721 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 14:28:28.146738 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 14:28:28.146755 kernel: signal: max sigframe size: 1776
Dec 13 14:28:28.146772 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:28:28.146788 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:28:28.146809 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:28:28.146826 kernel: x86: Booting SMP configuration:
Dec 13 14:28:28.146843 kernel: .... node #0, CPUs: #1
Dec 13 14:28:28.146876 kernel: kvm-clock: cpu 1, msr 18f19a041, secondary cpu clock
Dec 13 14:28:28.146894 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:28:28.146912 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
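The BogoMIPS value logged above is not measured here: under KVM the delay-loop calibration is skipped and lpj is preset from the detected 2299.998 MHz TSC. A quick arithmetic check in Python, assuming the kernel's usual bogomips = lpj * HZ / 500000 with HZ=1000 (an assumption; CONFIG_HZ is not printed in this log):

    lpj = 2299998          # loops per jiffy, from the log line above
    HZ = 1000              # assumed tick rate; not shown in this log
    bogomips = lpj * HZ / 500000
    print(bogomips)        # 4599.996 -> logged as "4599.99 BogoMIPS"
    print(2 * bogomips)    # 9199.992 -> matches the 2-CPU total just below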
Dec 13 14:28:28.146929 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:28:28.146945 kernel: smpboot: Max logical packages: 1
Dec 13 14:28:28.146965 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 14:28:28.146983 kernel: devtmpfs: initialized
Dec 13 14:28:28.147001 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:28:28.147018 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 14:28:28.147035 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:28:28.147051 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:28:28.147067 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:28:28.147083 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:28:28.147099 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:28:28.147120 kernel: audit: type=2000 audit(1734100106.790:1): state=initialized audit_enabled=0 res=1
Dec 13 14:28:28.147136 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:28:28.147153 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:28:28.147170 kernel: cpuidle: using governor menu
Dec 13 14:28:28.147188 kernel: ACPI: bus type PCI registered
Dec 13 14:28:28.147205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:28:28.147223 kernel: dca service started, version 1.12.1
Dec 13 14:28:28.147237 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:28:28.147253 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:28:28.147274 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:28:28.147292 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:28:28.147310 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:28:28.147328 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:28:28.147346 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:28:28.147363 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:28:28.147381 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:28:28.147399 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:28:28.147417 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:28:28.147452 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:28:28.147470 kernel: ACPI: Interpreter enabled
Dec 13 14:28:28.147486 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:28:28.147501 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:28:28.147516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:28:28.147534 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:28:28.147550 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:28:28.147791 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:28:28.151068 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
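The audit record above carries a Unix epoch timestamp, audit(1734100106.790:1), while the journal prefixes use wall-clock time; the two are easy to cross-check. A small Python sketch:

    from datetime import datetime, timezone

    # Epoch seconds from the audit record audit(1734100106.790:1)
    ts = 1734100106.790
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2024-12-13T14:28:26.790000+00:00, about 1.4 s before the journal
    # line that reports it was emitted at 14:28:28.147120.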
Dec 13 14:28:28.151106 kernel: PCI host bridge to bus 0000:00
Dec 13 14:28:28.151285 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:28:28.151451 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:28:28.151611 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:28:28.151762 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 14:28:28.158977 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:28:28.160599 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:28:28.161108 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 14:28:28.161311 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 14:28:28.161510 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:28:28.161705 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 14:28:28.161910 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 14:28:28.162098 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 14:28:28.162292 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:28:28.162484 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 14:28:28.162661 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 14:28:28.162863 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:28:28.163047 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 14:28:28.163223 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 14:28:28.163253 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:28:28.163273 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:28:28.163292 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:28:28.163310 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:28:28.163328 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:28:28.163346 kernel: iommu: Default domain type: Translated
Dec 13 14:28:28.163364 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:28:28.163382 kernel: vgaarb: loaded
Dec 13 14:28:28.163400 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:28:28.163423 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:28:28.163449 kernel: PTP clock support registered
Dec 13 14:28:28.163466 kernel: Registered efivars operations
Dec 13 14:28:28.163484 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:28:28.163502 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:28:28.163520 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 14:28:28.163538 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 14:28:28.163555 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff]
Dec 13 14:28:28.163572 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 14:28:28.163594 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 14:28:28.163618 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:28:28.163636 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:28:28.163654 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:28:28.163672 kernel: pnp: PnP ACPI init
Dec 13 14:28:28.163690 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 14:28:28.163708 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:28:28.163726 kernel: NET: Registered PF_INET protocol family
Dec 13 14:28:28.163744 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:28:28.163766 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 14:28:28.163785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:28:28.163803 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:28:28.163821 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 14:28:28.163838 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 14:28:28.163870 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:28:28.163889 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:28:28.163914 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:28:28.163936 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:28:28.164109 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:28:28.164272 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:28:28.164440 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:28:28.164598 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 14:28:28.164778 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:28:28.164803 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:28:28.164827 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 14:28:28.164846 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 14:28:28.164876 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:28:28.164892 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 14:28:28.164907 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:28:28.164921 kernel: Initialise system trusted keyrings
Dec 13 14:28:28.164935 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 14:28:28.164950 kernel: Key type asymmetric registered
Dec 13 14:28:28.164965 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:28:28.164987 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:28:28.165002 kernel: io scheduler mq-deadline registered
Dec 13 14:28:28.165017 kernel: io scheduler kyber registered
Dec 13 14:28:28.165032 kernel: io scheduler bfq registered
Dec 13 14:28:28.165046 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:28:28.165062 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 14:28:28.165258 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 14:28:28.165282 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 14:28:28.165471 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 14:28:28.165500 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 14:28:28.165671 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 14:28:28.165692 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:28:28.165710 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:28:28.165727 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 14:28:28.165745 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 14:28:28.165762 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 14:28:28.165964 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 14:28:28.165994 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:28:28.166010 kernel: i8042: Warning: Keylock active
Dec 13 14:28:28.166027 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:28:28.166044 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:28:28.166227 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:28:28.166384 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:28:28.166550 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:28:27 UTC (1734100107)
Dec 13 14:28:28.166707 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:28:28.166732 kernel: intel_pstate: CPU model not supported
Dec 13 14:28:28.166750 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:28:28.166766 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:28:28.166784 kernel: Segment Routing with IPv6
Dec 13 14:28:28.166801 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:28:28.166818 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:28:28.166835 kernel: Key type dns_resolver registered
Dec 13 14:28:28.166907 kernel: IPI shorthand broadcast: enabled
Dec 13 14:28:28.166929 kernel: sched_clock: Marking stable (753112741, 126242400)->(889655338, -10300197)
Dec 13 14:28:28.166952 kernel: registered taskstats version 1
Dec 13 14:28:28.166970 kernel: Loading compiled-in X.509 certificates
Dec 13 14:28:28.166988 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:28:28.167005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:28:28.167022 kernel: Key type .fscrypt registered
Dec 13 14:28:28.167039 kernel: Key type fscrypt-provisioning registered
Dec 13 14:28:28.167057 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:28:28.167075 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:28:28.167091 kernel: ima: No architecture policies found
Dec 13 14:28:28.167112 kernel: clk: Disabling unused clocks
Dec 13 14:28:28.167130 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:28:28.167147 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:28:28.167165 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:28:28.167182 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:28:28.167199 kernel: Run /init as init process
Dec 13 14:28:28.167216 kernel: with arguments:
Dec 13 14:28:28.167233 kernel: /init
Dec 13 14:28:28.167249 kernel: with environment:
Dec 13 14:28:28.167270 kernel: HOME=/
Dec 13 14:28:28.167286 kernel: TERM=linux
Dec 13 14:28:28.167302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:28:28.167323 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:28:28.167345 systemd[1]: Detected virtualization kvm.
Dec 13 14:28:28.167363 systemd[1]: Detected architecture x86-64.
Dec 13 14:28:28.167380 systemd[1]: Running in initrd.
Dec 13 14:28:28.167401 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:28:28.167419 systemd[1]: Hostname set to <localhost>.
Dec 13 14:28:28.167445 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:28:28.167462 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:28:28.167480 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:28:28.167497 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:28:28.167515 systemd[1]: Reached target paths.target.
Dec 13 14:28:28.167532 systemd[1]: Reached target slices.target.
Dec 13 14:28:28.167553 systemd[1]: Reached target swap.target.
Dec 13 14:28:28.167570 systemd[1]: Reached target timers.target.
Dec 13 14:28:28.167590 systemd[1]: Listening on iscsid.socket.
Dec 13 14:28:28.167608 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:28:28.167626 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:28:28.167644 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:28:28.167662 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:28:28.167680 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:28:28.167702 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:28:28.167720 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:28:28.167760 systemd[1]: Reached target sockets.target.
Dec 13 14:28:28.167783 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:28:28.167801 systemd[1]: Finished network-cleanup.service.
Dec 13 14:28:28.167820 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:28:28.167838 systemd[1]: Starting systemd-journald.service...
Dec 13 14:28:28.167873 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:28:28.167891 kernel: audit: type=1334 audit(1734100108.130:2): prog-id=6 op=LOAD
Dec 13 14:28:28.167909 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:28:28.167928 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:28:28.167946 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:28:28.167965 kernel: audit: type=1130 audit(1734100108.163:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.167991 systemd-journald[190]: Journal started
Dec 13 14:28:28.168084 systemd-journald[190]: Runtime Journal (/run/log/journal/02ebb58a5a9fad6552d4401c0f1a749e) is 8.0M, max 148.8M, 140.8M free.
Dec 13 14:28:28.130000 audit: BPF prog-id=6 op=LOAD
Dec 13 14:28:28.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.177233 systemd[1]: Started systemd-journald.service.
Dec 13 14:28:28.177281 kernel: audit: type=1130 audit(1734100108.171:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.169392 systemd-modules-load[191]: Inserted module 'overlay'
Dec 13 14:28:28.187097 kernel: audit: type=1130 audit(1734100108.178:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.173292 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:28:28.196091 kernel: audit: type=1130 audit(1734100108.189:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.180298 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:28:28.192532 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:28:28.200219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:28:28.221248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:28:28.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.225878 kernel: audit: type=1130 audit(1734100108.219:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.229536 systemd-resolved[192]: Positive Trust Anchors:
Dec 13 14:28:28.229961 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:28:28.230129 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:28:28.236104 systemd-resolved[192]: Defaulting to hostname 'linux'.
Dec 13 14:28:28.237829 systemd[1]: Started systemd-resolved.service.
Dec 13 14:28:28.252375 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:28:28.252426 kernel: audit: type=1130 audit(1734100108.250:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.252353 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:28:28.274200 kernel: audit: type=1130 audit(1734100108.257:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.274246 kernel: Bridge firewalling registered
Dec 13 14:28:28.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.259096 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:28:28.262546 systemd-modules-load[191]: Inserted module 'br_netfilter'
Dec 13 14:28:28.267362 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:28:28.290727 dracut-cmdline[205]: dracut-dracut-053
Dec 13 14:28:28.293873 kernel: SCSI subsystem initialized
Dec 13 14:28:28.293977 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:28.312467 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:28:28.312527 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:28:28.314014 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:28:28.318845 systemd-modules-load[191]: Inserted module 'dm_multipath'
Dec 13 14:28:28.320811 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:28:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.334936 kernel: audit: type=1130 audit(1734100108.329:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.332259 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:28:28.345928 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:28:28.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.393904 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:28:28.413892 kernel: iscsi: registered transport (tcp)
Dec 13 14:28:28.441234 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:28:28.441346 kernel: QLogic iSCSI HBA Driver
Dec 13 14:28:28.488012 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:28:28.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.489968 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:28:28.547914 kernel: raid6: avx2x4 gen() 22766 MB/s
Dec 13 14:28:28.564902 kernel: raid6: avx2x4 xor() 6385 MB/s
Dec 13 14:28:28.581901 kernel: raid6: avx2x2 gen() 23397 MB/s
Dec 13 14:28:28.598900 kernel: raid6: avx2x2 xor() 18577 MB/s
Dec 13 14:28:28.615895 kernel: raid6: avx2x1 gen() 21060 MB/s
Dec 13 14:28:28.632894 kernel: raid6: avx2x1 xor() 16154 MB/s
Dec 13 14:28:28.649897 kernel: raid6: sse2x4 gen() 10316 MB/s
Dec 13 14:28:28.666895 kernel: raid6: sse2x4 xor() 6281 MB/s
Dec 13 14:28:28.683891 kernel: raid6: sse2x2 gen() 10872 MB/s
Dec 13 14:28:28.700890 kernel: raid6: sse2x2 xor() 7405 MB/s
Dec 13 14:28:28.717893 kernel: raid6: sse2x1 gen() 9725 MB/s
Dec 13 14:28:28.735267 kernel: raid6: sse2x1 xor() 5172 MB/s
Dec 13 14:28:28.735306 kernel: raid6: using algorithm avx2x2 gen() 23397 MB/s
Dec 13 14:28:28.735331 kernel: raid6: .... xor() 18577 MB/s, rmw enabled
Dec 13 14:28:28.735965 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:28:28.750897 kernel: xor: automatically using best checksumming function avx
Dec 13 14:28:28.858894 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:28:28.871430 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:28:28.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.875000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:28:28.875000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:28:28.878021 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:28:28.895960 systemd-udevd[388]: Using default interface naming scheme 'v252'.
Dec 13 14:28:28.903228 systemd[1]: Started systemd-udevd.service.
Dec 13 14:28:28.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.907828 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:28:28.926961 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation
Dec 13 14:28:28.964994 systemd[1]: Finished dracut-pre-trigger.service.
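The raid6 lines above are the kernel benchmarking its parity implementations and then keeping the one with the fastest gen() pass, which is why the log settles on avx2x2. The same selection can be reproduced from the logged numbers (a toy Python re-enactment, not kernel code):

    # Benchmark results from the raid6 lines above, in MB/s.
    results = {
        "avx2x4": {"gen": 22766, "xor": 6385},
        "avx2x2": {"gen": 23397, "xor": 18577},
        "avx2x1": {"gen": 21060, "xor": 16154},
        "sse2x4": {"gen": 10316, "xor": 6281},
        "sse2x2": {"gen": 10872, "xor": 7405},
        "sse2x1": {"gen": 9725, "xor": 5172},
    }

    # Pick the algorithm with the highest gen() throughput.
    best = max(results, key=lambda name: results[name]["gen"])
    print(best, results[best])  # avx2x2 {'gen': 23397, 'xor': 18577}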
Dec 13 14:28:28.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:28.966553 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:28:29.034398 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:28:29.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:29.108888 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:28:29.143882 kernel: scsi host0: Virtio SCSI HBA
Dec 13 14:28:29.198878 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Dec 13 14:28:29.206879 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:28:29.211877 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:28:29.254425 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 14:28:29.268927 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 14:28:29.269185 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 14:28:29.269403 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 14:28:29.269636 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 14:28:29.269844 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:28:29.269886 kernel: GPT:17805311 != 25165823
Dec 13 14:28:29.269908 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:28:29.269931 kernel: GPT:17805311 != 25165823
Dec 13 14:28:29.269963 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:28:29.269985 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:29.270010 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 14:28:29.315887 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437)
Dec 13 14:28:29.325958 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:28:29.346001 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:28:29.360946 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:28:29.387937 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:28:29.397154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:28:29.414415 systemd[1]: Starting disk-uuid.service...
Dec 13 14:28:29.426920 disk-uuid[507]: Primary Header is updated.
Dec 13 14:28:29.426920 disk-uuid[507]: Secondary Entries is updated.
Dec 13 14:28:29.426920 disk-uuid[507]: Secondary Header is updated.
Dec 13 14:28:29.457981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:29.467879 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:29.489883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:30.484883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:28:30.485080 disk-uuid[508]: The operation has completed successfully.
Dec 13 14:28:30.557326 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:28:30.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:30.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:30.557490 systemd[1]: Finished disk-uuid.service.
Dec 13 14:28:30.576721 systemd[1]: Starting verity-setup.service...
Dec 13 14:28:30.604881 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:28:30.679947 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:28:30.688340 systemd[1]: Finished verity-setup.service.
Dec 13 14:28:30.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:30.705275 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:28:30.806510 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:28:30.806308 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:28:30.814284 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:28:30.815586 systemd[1]: Starting ignition-setup.service...
Dec 13 14:28:30.870263 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:28:30.870313 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:28:30.870339 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:28:30.870365 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:28:30.859616 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:28:30.891115 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:28:30.909876 systemd[1]: Finished ignition-setup.service.
Dec 13 14:28:30.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:30.911970 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:28:30.984398 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:28:30.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:30.992000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:28:30.996194 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:28:31.033220 systemd-networkd[682]: lo: Link UP
Dec 13 14:28:31.033237 systemd-networkd[682]: lo: Gained carrier
Dec 13 14:28:31.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.034484 systemd-networkd[682]: Enumeration completed
Dec 13 14:28:31.034968 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:28:31.035118 systemd[1]: Started systemd-networkd.service.
Dec 13 14:28:31.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.038084 systemd-networkd[682]: eth0: Link UP
Dec 13 14:28:31.118053 iscsid[691]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:28:31.118053 iscsid[691]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:28:31.118053 iscsid[691]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:28:31.118053 iscsid[691]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:28:31.118053 iscsid[691]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:28:31.118053 iscsid[691]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:28:31.118053 iscsid[691]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:28:31.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.038092 systemd-networkd[682]: eth0: Gained carrier
Dec 13 14:28:31.178784 ignition[614]: Ignition 2.14.0
Dec 13 14:28:31.048985 systemd-networkd[682]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 14:28:31.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.178796 ignition[614]: Stage: fetch-offline
Dec 13 14:28:31.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.050490 systemd[1]: Reached target network.target.
Dec 13 14:28:31.178936 ignition[614]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:31.066325 systemd[1]: Starting iscsiuio.service...
Dec 13 14:28:31.178997 ignition[614]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:28:31.090193 systemd[1]: Started iscsiuio.service.
Dec 13 14:28:31.200483 ignition[614]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:28:31.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.098803 systemd[1]: Starting iscsid.service...
Dec 13 14:28:31.200724 ignition[614]: parsed url from cmdline: ""
Dec 13 14:28:31.111343 systemd[1]: Started iscsid.service.
Dec 13 14:28:31.200733 ignition[614]: no config URL provided
Dec 13 14:28:31.126877 systemd[1]: Starting dracut-initqueue.service...
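The GPT warnings above are a size mismatch, not corruption: the backup GPT header sits where a smaller disk image ended (LBA 17805311), while the attached PersistentDisk actually ends at LBA 25165823; the disk-uuid step above then rewrites the headers. The arithmetic, as a small Python check:

    SECTOR = 512
    old_last_lba = 17805311   # where the backup GPT header thinks the disk ends
    new_last_lba = 25165823   # actual last LBA of the attached disk

    old_size = (old_last_lba + 1) * SECTOR
    new_size = (new_last_lba + 1) * SECTOR
    print(old_size / 2**30, "GiB ->", new_size / 2**30, "GiB")
    # ~8.49 GiB -> 12.0 GiB: the provisioned disk is larger than the image,
    # matching the "[sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)" line.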
Dec 13 14:28:31.200742 ignition[614]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:28:31.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.147715 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:28:31.200756 ignition[614]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:28:31.168441 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:28:31.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.200767 ignition[614]: failed to fetch config: resource requires networking
Dec 13 14:28:31.212057 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:28:31.201398 ignition[614]: Ignition finished successfully
Dec 13 14:28:31.229010 systemd[1]: Reached target remote-fs.target.
Dec 13 14:28:31.325173 ignition[706]: Ignition 2.14.0
Dec 13 14:28:31.248499 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:28:31.325184 ignition[706]: Stage: fetch
Dec 13 14:28:31.273485 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:28:31.325348 ignition[706]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:31.287447 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:28:31.325387 ignition[706]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:28:31.310872 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:28:31.334728 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:28:31.347637 unknown[706]: fetched base config from "system"
Dec 13 14:28:31.334967 ignition[706]: parsed url from cmdline: ""
Dec 13 14:28:31.347659 unknown[706]: fetched base config from "system"
Dec 13 14:28:31.334980 ignition[706]: no config URL provided
Dec 13 14:28:31.347676 unknown[706]: fetched user config from "gcp"
Dec 13 14:28:31.334991 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:28:31.352124 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:28:31.335004 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:28:31.365699 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:28:31.335047 ignition[706]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 14:28:31.404504 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:28:31.341413 ignition[706]: GET result: OK
Dec 13 14:28:31.421560 systemd[1]: Starting ignition-disks.service...
Dec 13 14:28:31.341528 ignition[706]: parsing config with SHA512: b96306805a0d8489826a2d0d45ccc722401a27c09dd4a767de8c5cf9fe4539aef8f12a1486e18ca2a6db1057a007f971d7a27acce4adb0266aa60d44be2397e2
Dec 13 14:28:31.445987 systemd[1]: Finished ignition-disks.service.
Dec 13 14:28:31.350225 ignition[706]: fetch: fetch complete
Dec 13 14:28:31.453432 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:28:31.350235 ignition[706]: fetch: fetch passed
Dec 13 14:28:31.475071 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:28:31.350327 ignition[706]: Ignition finished successfully
Dec 13 14:28:31.490023 systemd[1]: Reached target local-fs.target.
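The fetch stage above pulls the user-provided config from the GCE metadata server at 169.254.169.254; such requests must carry the Metadata-Flavor header. A minimal Python sketch of a roughly equivalent query (illustrative only; Ignition itself is a Go binary, and the endpoint is only reachable from inside a GCE VM):

    import urllib.request

    # Same endpoint Ignition GETs in the log above.
    URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())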
Dec 13 14:28:31.380248 ignition[712]: Ignition 2.14.0
Dec 13 14:28:31.504033 systemd[1]: Reached target sysinit.target.
Dec 13 14:28:31.380263 ignition[712]: Stage: kargs
Dec 13 14:28:31.517019 systemd[1]: Reached target basic.target.
Dec 13 14:28:31.380422 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:31.518601 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:28:31.380454 ignition[712]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:28:31.388172 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:28:31.389834 ignition[712]: kargs: kargs passed
Dec 13 14:28:31.389911 ignition[712]: Ignition finished successfully
Dec 13 14:28:31.435370 ignition[718]: Ignition 2.14.0
Dec 13 14:28:31.435379 ignition[718]: Stage: disks
Dec 13 14:28:31.435536 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:31.435570 ignition[718]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 14:28:31.443195 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 14:28:31.444760 ignition[718]: disks: disks passed
Dec 13 14:28:31.444819 ignition[718]: Ignition finished successfully
Dec 13 14:28:31.557581 systemd-fsck[726]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 14:28:31.783070 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:28:31.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:31.792556 systemd[1]: Mounting sysroot.mount...
Dec 13 14:28:31.822071 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:28:31.820161 systemd[1]: Mounted sysroot.mount.
Dec 13 14:28:31.829243 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:28:31.849452 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:28:31.866541 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:28:31.866638 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:28:31.866795 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:28:31.880733 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:28:31.952068 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (732)
Dec 13 14:28:31.952117 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:28:31.952140 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:28:31.952166 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:28:31.905436 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:28:31.973067 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:28:31.929923 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:28:31.975445 systemd[1]: Mounted sysroot-usr-share-oem.mount.
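The fsck summary above doubles as a quick capacity reading for the ROOT filesystem, since it reports used/total counts for both inodes and blocks. A trivial Python check of the percentages:

    files_used, files_total = 621, 1628000
    blocks_used, blocks_total = 124058, 1617920

    print(f"inodes: {files_used / files_total:.2%} used")   # ~0.04%
    print(f"blocks: {blocks_used / blocks_total:.2%} used") # ~7.67%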
Dec 13 14:28:31.998071 initrd-setup-root[737]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:28:32.008027 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:28:32.018020 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:28:32.027988 initrd-setup-root[779]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:28:32.056219 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:28:32.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:32.057769 systemd[1]: Starting ignition-mount.service... Dec 13 14:28:32.078309 systemd[1]: Starting sysroot-boot.service... Dec 13 14:28:32.092327 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:32.092531 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:32.119027 ignition[798]: INFO : Ignition 2.14.0 Dec 13 14:28:32.119027 ignition[798]: INFO : Stage: mount Dec 13 14:28:32.119027 ignition[798]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:32.119027 ignition[798]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:28:32.176169 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 13 14:28:32.176207 kernel: audit: type=1130 audit(1734100112.132:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:32.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:32.127113 systemd[1]: Finished sysroot-boot.service. Dec 13 14:28:32.228078 kernel: audit: type=1130 audit(1734100112.199:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:32.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:32.228204 ignition[798]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:28:32.228204 ignition[798]: INFO : mount: mount passed Dec 13 14:28:32.228204 ignition[798]: INFO : Ignition finished successfully Dec 13 14:28:32.300047 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (807) Dec 13 14:28:32.300102 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:32.300129 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:28:32.300152 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:28:32.300174 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:28:32.134515 systemd[1]: Finished ignition-mount.service. Dec 13 14:28:32.203112 systemd[1]: Starting ignition-files.service... Dec 13 14:28:32.239614 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
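The kernel lines above show /dev/sda6 (label OEM, btrfs) being mounted under the freshly assembled /sysroot. Mounts like these can be confirmed at runtime from /proc/self/mountinfo; a small generic sketch, not part of the boot flow, that lists mount point, filesystem type, and source beneath a given prefix:

```python
def mounts_under(prefix: str):
    # /proc/self/mountinfo: the mount point is field 5 (index 4); the
    # filesystem type and source follow the " - " separator field.
    with open("/proc/self/mountinfo") as f:
        for line in f:
            fixed, _, rest = line.partition(" - ")
            mount_point = fixed.split()[4]
            fstype, source = rest.split()[:2]
            if mount_point.startswith(prefix):
                yield mount_point, fstype, source

for mp, fs, src in mounts_under("/sysroot"):
    print(f"{mp}: {fs} on {src}")
```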
Dec 13 14:28:32.257244 systemd-networkd[682]: eth0: Gained IPv6LL Dec 13 14:28:32.338011 ignition[826]: INFO : Ignition 2.14.0 Dec 13 14:28:32.338011 ignition[826]: INFO : Stage: files Dec 13 14:28:32.338011 ignition[826]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:32.338011 ignition[826]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:28:32.338011 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:28:32.300759 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:28:32.403010 ignition[826]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:28:32.403010 ignition[826]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:28:32.403010 ignition[826]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:28:32.403010 ignition[826]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:28:32.403010 ignition[826]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:28:32.403010 ignition[826]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:28:32.403010 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:28:32.403010 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:28:32.403010 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:28:32.403010 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:28:32.371262 unknown[826]: wrote ssh authorized keys file for user: core Dec 13 14:28:32.547014 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:28:32.626561 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:28:32.654066 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (826) Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2216448768" Dec 13 14:28:32.654114 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2216448768": device or resource busy Dec 13 14:28:32.654114 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2216448768", trying btrfs: device or resource busy Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2216448768" Dec 13 
14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2216448768" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem2216448768" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem2216448768" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:28:32.654114 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem157455783" Dec 13 14:28:32.880052 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem157455783": device or resource busy Dec 13 14:28:32.880052 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem157455783", trying btrfs: device or resource busy Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem157455783" Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem157455783" Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem157455783" Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem157455783" Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:28:32.880052 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:28:33.052072 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Dec 13 14:28:33.145429 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:28:33.145429 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:28:33.176034 
ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:33.176034 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647023113" Dec 13 14:28:33.176034 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647023113": device or resource busy Dec 13 14:28:33.176034 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1647023113", trying btrfs: device or resource busy Dec 13 14:28:33.167965 systemd[1]: mnt-oem1647023113.mount: Deactivated successfully. 
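The repeated mount failures above ("failed to mount ext4 ... trying btrfs: device or resource busy") show Ignition attempting filesystem types in sequence against the OEM partition until one attempt succeeds, then unmounting once the file is written. A hedged re-creation of that retry loop using the mount(8) CLI; the device path is the one from the log, and this is an illustration, not Ignition's actual Go implementation:

```python
import subprocess
import tempfile

DEVICE = "/dev/disk/by-label/OEM"  # device path from the log above

def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
    target = tempfile.mkdtemp(prefix="oem")
    for fstype in fstypes:
        # mount(8) exits non-zero when an attempt fails (wrong type,
        # device busy, ...), which is what the CRITICAL/ERROR lines record.
        result = subprocess.run(["mount", "-t", fstype, device, target],
                                capture_output=True)
        if result.returncode == 0:
            return target  # caller unmounts afterwards, as ops (8)/(d)/(17) show
    raise RuntimeError(f"no mountable filesystem on {device}")
```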
Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647023113" Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647023113" Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem1647023113" Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem1647023113" Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:28:33.432053 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): GET result: OK Dec 13 14:28:33.710371 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:33.710371 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2628216963" Dec 13 14:28:33.746137 ignition[826]: CRITICAL : files: createFilesystemsFiles: createFiles: op(19): op(1a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2628216963": device or resource busy Dec 13 14:28:33.746137 ignition[826]: ERROR : files: createFilesystemsFiles: createFiles: op(19): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2628216963", trying btrfs: device or resource busy Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2628216963" Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2628216963" Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [started] unmounting "/mnt/oem2628216963" Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): op(1c): [finished] unmounting "/mnt/oem2628216963" Dec 13 14:28:33.746137 ignition[826]: INFO : files: createFilesystemsFiles: createFiles: op(19): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(1d): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(1d): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(1e): 
[started] processing unit "oem-gce.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(1e): [finished] processing unit "oem-gce.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(1f): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(1f): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(20): [started] processing unit "containerd.service" Dec 13 14:28:33.746137 ignition[826]: INFO : files: op(20): op(21): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:28:34.228175 kernel: audit: type=1130 audit(1734100113.752:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.228224 kernel: audit: type=1130 audit(1734100113.836:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.228244 kernel: audit: type=1130 audit(1734100113.885:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.228259 kernel: audit: type=1131 audit(1734100113.885:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.228287 kernel: audit: type=1130 audit(1734100114.002:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.228302 kernel: audit: type=1131 audit(1734100114.002:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.228317 kernel: audit: type=1130 audit(1734100114.151:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:34.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.730692 systemd[1]: Finished ignition-files.service. Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(20): op(21): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(20): [finished] processing unit "containerd.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(22): [started] processing unit "prepare-helm.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(22): op(23): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(22): op(23): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(22): [finished] processing unit "prepare-helm.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(25): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(26): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(26): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(27): [started] setting preset to enabled for "oem-gce.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: op(27): [finished] setting preset to enabled for "oem-gce.service" Dec 13 14:28:34.243165 ignition[826]: INFO : files: createResultFile: createFiles: op(28): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:28:34.243165 ignition[826]: INFO : files: createResultFile: createFiles: op(28): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:28:34.243165 ignition[826]: INFO : files: files passed Dec 13 14:28:34.243165 ignition[826]: INFO : Ignition finished successfully Dec 13 14:28:34.584055 kernel: audit: type=1131 audit(1734100114.294:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.765128 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
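The files stage closes by writing /sysroot/etc/.ignition-result.json (op(28) above) and reporting "files passed". The result file's exact schema varies across Ignition versions, so the sketch below makes no assumptions about its fields and simply pretty-prints whatever is present:

```python
import json

# Path from the log; on the booted system it appears as /etc/.ignition-result.json.
with open("/sysroot/etc/.ignition-result.json") as f:
    print(json.dumps(json.load(f), indent=2, sort_keys=True))
```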
Dec 13 14:28:33.790196 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:28:34.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.626170 initrd-setup-root-after-ignition[849]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:28:33.791403 systemd[1]: Starting ignition-quench.service... Dec 13 14:28:34.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.814505 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:28:34.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.838612 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:28:34.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.838760 systemd[1]: Finished ignition-quench.service. Dec 13 14:28:33.887497 systemd[1]: Reached target ignition-complete.target. Dec 13 14:28:33.950675 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:28:34.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.995184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:28:34.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.775151 ignition[864]: INFO : Ignition 2.14.0 Dec 13 14:28:34.775151 ignition[864]: INFO : Stage: umount Dec 13 14:28:34.775151 ignition[864]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:34.775151 ignition[864]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:28:34.775151 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:28:34.775151 ignition[864]: INFO : umount: umount passed Dec 13 14:28:34.775151 ignition[864]: INFO : Ignition finished successfully Dec 13 14:28:34.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:34.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:33.995313 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:28:34.004257 systemd[1]: Reached target initrd-fs.target. Dec 13 14:28:34.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.082251 systemd[1]: Reached target initrd.target. Dec 13 14:28:34.113295 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:28:34.114822 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:28:34.135437 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:28:34.154922 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:28:34.217046 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:28:34.236319 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:28:34.251314 systemd[1]: Stopped target timers.target. Dec 13 14:28:35.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.277261 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:28:35.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.277477 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:28:34.296523 systemd[1]: Stopped target initrd.target. Dec 13 14:28:34.327428 systemd[1]: Stopped target basic.target. Dec 13 14:28:35.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.349378 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:28:35.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.389382 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:28:35.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:35.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:35.112000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:28:34.423401 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:28:34.436497 systemd[1]: Stopped target remote-fs.target. Dec 13 14:28:34.475419 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:28:35.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.489494 systemd[1]: Stopped target sysinit.target. Dec 13 14:28:35.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.525448 systemd[1]: Stopped target local-fs.target. Dec 13 14:28:35.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.561385 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:28:34.576386 systemd[1]: Stopped target swap.target. Dec 13 14:28:34.591282 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:28:35.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.591517 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:28:34.609447 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:28:34.634279 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:28:35.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.634492 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:28:35.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.656456 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:28:35.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.656666 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:28:34.673400 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:28:34.673589 systemd[1]: Stopped ignition-files.service. Dec 13 14:28:35.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.691070 systemd[1]: Stopping ignition-mount.service... Dec 13 14:28:35.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.709459 systemd[1]: Stopping iscsiuio.service... Dec 13 14:28:35.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:28:35.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:34.724999 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:28:34.725290 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:28:35.415000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:28:35.415000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:28:35.415000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:28:34.742774 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:28:35.416000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:28:35.416000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:28:34.757043 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:28:34.757376 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:28:35.452061 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). Dec 13 14:28:35.452135 iscsid[691]: iscsid shutting down. Dec 13 14:28:34.767381 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:28:34.767551 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:28:34.787774 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:28:34.788975 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:28:34.789106 systemd[1]: Stopped iscsiuio.service. Dec 13 14:28:34.791757 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:28:34.791930 systemd[1]: Stopped ignition-mount.service. Dec 13 14:28:34.803749 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:28:34.803939 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:28:34.827703 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:28:34.827839 systemd[1]: Stopped ignition-disks.service. Dec 13 14:28:34.854096 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:28:34.854187 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:28:34.873060 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:28:34.873147 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:28:34.888044 systemd[1]: Stopped target network.target. Dec 13 14:28:34.902009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:28:34.902114 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:28:34.916142 systemd[1]: Stopped target paths.target. Dec 13 14:28:34.930004 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:28:34.931963 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:28:34.945123 systemd[1]: Stopped target slices.target. Dec 13 14:28:34.963103 systemd[1]: Stopped target sockets.target. Dec 13 14:28:34.971207 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:28:34.971248 systemd[1]: Closed iscsid.socket. Dec 13 14:28:34.997206 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:28:34.997278 systemd[1]: Closed iscsiuio.socket. Dec 13 14:28:35.004270 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:28:35.004352 systemd[1]: Stopped ignition-setup.service. Dec 13 14:28:35.017313 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:28:35.017390 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:28:35.038416 systemd[1]: Stopping systemd-networkd.service... 
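In the teardown above, stopping iscsiuio.service and closing iscsiuio.socket/iscsid.socket are separate operations because socket-activated services inherit already-bound listeners from systemd rather than binding themselves. A minimal sketch of the receiving side of that hand-off, assuming a stream socket: systemd passes inherited sockets starting at file descriptor 3 and advertises them through the LISTEN_FDS/LISTEN_PID environment variables. The echo behavior is invented purely for illustration:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited fd, per the sd_listen_fds convention

def inherited_sockets():
    # systemd sets LISTEN_PID to the service's PID and LISTEN_FDS to the count.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

for sock in inherited_sockets():
    conn, _ = sock.accept()        # the .socket unit already did bind()+listen()
    conn.sendall(conn.recv(4096))  # hypothetical echo, just to show usage
    conn.close()
```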
Dec 13 14:28:35.041939 systemd-networkd[682]: eth0: DHCPv6 lease lost Dec 13 14:28:35.458000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:28:35.054282 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:28:35.062725 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:28:35.062896 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:28:35.083925 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:28:35.084086 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:28:35.098833 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:28:35.098989 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:28:35.115669 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:28:35.115731 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:28:35.131380 systemd[1]: Stopping network-cleanup.service... Dec 13 14:28:35.144981 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:28:35.145128 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:28:35.160155 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:28:35.160240 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:28:35.176282 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:28:35.176348 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:28:35.191245 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:28:35.212734 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:28:35.213469 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:28:35.213640 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:28:35.228920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:28:35.229025 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:28:35.244070 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:28:35.244136 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:28:35.259059 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:28:35.259169 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:28:35.274157 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:28:35.274244 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:28:35.290148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:28:35.290249 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:28:35.306501 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:28:35.332140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:28:35.332249 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:28:35.348908 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:28:35.349067 systemd[1]: Stopped network-cleanup.service. Dec 13 14:28:35.363570 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:28:35.363718 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:28:35.378434 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:28:35.394674 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:28:35.412538 systemd[1]: Switching root. Dec 13 14:28:35.462668 systemd-journald[190]: Journal stopped Dec 13 14:28:40.155090 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:28:40.155296 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:28:40.155332 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:28:40.155359 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:28:40.155386 kernel: SELinux: policy capability open_perms=1 Dec 13 14:28:40.155415 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:28:40.155442 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:28:40.155474 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:28:40.155500 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:28:40.155533 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:28:40.155564 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:28:40.155602 systemd[1]: Successfully loaded SELinux policy in 112.222ms. Dec 13 14:28:40.155651 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.857ms. Dec 13 14:28:40.155683 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:28:40.155712 systemd[1]: Detected virtualization kvm. Dec 13 14:28:40.155739 systemd[1]: Detected architecture x86-64. Dec 13 14:28:40.155766 systemd[1]: Detected first boot. Dec 13 14:28:40.155795 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:28:40.155822 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:28:40.155870 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:28:40.155900 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:28:40.155944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:40.155974 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:28:40.156010 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:28:40.156038 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:28:40.156065 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:28:40.156099 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:28:40.156126 systemd[1]: Created slice system-getty.slice. Dec 13 14:28:40.156156 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:28:40.156184 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:28:40.156214 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:28:40.156241 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:28:40.156269 systemd[1]: Created slice user.slice. Dec 13 14:28:40.156296 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:28:40.156323 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:28:40.156357 systemd[1]: Set up automount boot.automount. Dec 13 14:28:40.156384 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:28:40.156411 systemd[1]: Reached target integritysetup.target. 
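"SELinux: Class mctp_socket not defined in policy" above means the loaded policy predates some object classes the kernel knows about, and the next message confirms such unknown classes are allowed rather than denied. Whether a given class is defined can be checked against selinuxfs; a short sketch assuming SELinux is mounted at the conventional /sys/fs/selinux:

```python
import os

SELINUXFS = "/sys/fs/selinux"

def class_defined(name: str) -> bool:
    # selinuxfs exposes one directory per object class known to the policy.
    return os.path.isdir(os.path.join(SELINUXFS, "class", name))

for cls in ("mctp_socket", "anon_inode", "file"):
    print(cls, "defined" if class_defined(cls) else "unknown to policy")
```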
Dec 13 14:28:40.156438 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:28:40.156465 systemd[1]: Reached target remote-fs.target. Dec 13 14:28:40.156494 systemd[1]: Reached target slices.target. Dec 13 14:28:40.156527 systemd[1]: Reached target swap.target. Dec 13 14:28:40.156555 systemd[1]: Reached target torcx.target. Dec 13 14:28:40.156588 systemd[1]: Reached target veritysetup.target. Dec 13 14:28:40.156615 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:28:40.156642 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:28:40.156669 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 13 14:28:40.156697 kernel: audit: type=1400 audit(1734100119.624:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:40.156723 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:28:40.156752 kernel: audit: type=1335 audit(1734100119.624:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:28:40.156779 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:28:40.156811 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:28:40.156838 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:28:40.156879 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:28:40.156907 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:28:40.156935 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:28:40.156962 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:28:40.156990 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:28:40.157017 systemd[1]: Mounting media.mount... Dec 13 14:28:40.157046 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:40.157073 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:28:40.157105 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:28:40.157133 systemd[1]: Mounting tmp.mount... Dec 13 14:28:40.157160 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:28:40.157188 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:40.157215 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:28:40.157243 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:28:40.157270 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:40.157298 systemd[1]: Starting modprobe@drm.service... Dec 13 14:28:40.157325 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:40.157365 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:28:40.157392 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:40.157418 kernel: fuse: init (API version 7.34) Dec 13 14:28:40.157445 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:28:40.157473 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:28:40.157500 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:28:40.157533 kernel: loop: module loaded Dec 13 14:28:40.157559 systemd[1]: Starting systemd-journald.service... Dec 13 14:28:40.157591 systemd[1]: Starting systemd-modules-load.service... 
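A little above, systemd reports "Detected first boot" and "Initializing machine ID from VM UUID": on first boot it can seed /etc/machine-id from the hypervisor-supplied product UUID rather than generating a random one. The UUID itself is exposed through DMI; a trivial sketch that reads both values for comparison — root is normally required for the UUID, and the exact derivation systemd applies is internal, so no relationship beyond printing both is asserted here:

```python
def read(path: str) -> str:
    with open(path) as f:
        return f.read().strip()

# Hypervisor-provided VM UUID (usually root-only) and the resulting machine ID.
print("product_uuid:", read("/sys/class/dmi/id/product_uuid"))
print("machine-id:  ", read("/etc/machine-id"))
```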
Dec 13 14:28:40.157618 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:28:40.157646 kernel: audit: type=1305 audit(1734100120.102:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:28:40.157673 kernel: audit: type=1300 audit(1734100120.102:90): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffec7394fd0 a2=4000 a3=7ffec739506c items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:40.157699 kernel: audit: type=1327 audit(1734100120.102:90): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:28:40.157736 systemd-journald[1026]: Journal started Dec 13 14:28:40.157890 systemd-journald[1026]: Runtime Journal (/run/log/journal/02ebb58a5a9fad6552d4401c0f1a749e) is 8.0M, max 148.8M, 140.8M free. Dec 13 14:28:39.624000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:39.624000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:28:40.102000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:28:40.102000 audit[1026]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffec7394fd0 a2=4000 a3=7ffec739506c items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:40.102000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:28:40.174907 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:28:40.189931 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:28:40.210420 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:40.219897 systemd[1]: Started systemd-journald.service. Dec 13 14:28:40.230353 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:28:40.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.254911 kernel: audit: type=1130 audit(1734100120.226:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.260115 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:28:40.267245 systemd[1]: Mounted media.mount. Dec 13 14:28:40.274400 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:28:40.284185 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:28:40.294185 systemd[1]: Mounted tmp.mount. Dec 13 14:28:40.301695 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:28:40.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:40.310643 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:28:40.335012 kernel: audit: type=1130 audit(1734100120.308:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.341782 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:28:40.342158 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:28:40.364898 kernel: audit: type=1130 audit(1734100120.339:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.373688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:40.374042 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:40.417421 kernel: audit: type=1130 audit(1734100120.371:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.417584 kernel: audit: type=1131 audit(1734100120.371:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.426645 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:28:40.426972 systemd[1]: Finished modprobe@drm.service. Dec 13 14:28:40.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.435632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:40.435972 systemd[1]: Finished modprobe@efi_pstore.service. 
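The raw SYSCALL record just above ("arch=c000003e syscall=46 success=yes exit=60 ... comm=\"systemd-journal\"") identifies the call only by number: 0xc000003e is AUDIT_ARCH_X86_64, and syscall 46 on x86_64 is sendmsg, i.e. journald sending 60 bytes on a socket. A sketch that decodes those two fields from such a line; the lookup tables are deliberately truncated to numbers appearing in this log (175 is x86_64 init_module, seen further below):

```python
import re

AUDIT_ARCH = {0xc000003e: "x86_64"}
SYSCALLS_X86_64 = {46: "sendmsg", 175: "init_module"}

LINE = ('audit: SYSCALL arch=c000003e syscall=46 success=yes exit=60 '
        'comm="systemd-journal"')

m = re.search(r"arch=([0-9a-f]+) syscall=(\d+)", LINE)
if m:
    arch = AUDIT_ARCH.get(int(m.group(1), 16), "unknown arch")
    name = SYSCALLS_X86_64.get(int(m.group(2)), "unknown syscall")
    print(f"{arch}: {name}")   # -> x86_64: sendmsg
```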
Dec 13 14:28:40.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.445542 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:28:40.445813 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:28:40.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.454519 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:40.454845 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:40.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.463741 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:28:40.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.472681 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:28:40.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.481569 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:28:40.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.490549 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:28:40.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.499732 systemd[1]: Reached target network-pre.target. Dec 13 14:28:40.509765 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:28:40.520045 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:28:40.527014 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:28:40.530448 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:28:40.539692 systemd[1]: Starting systemd-journal-flush.service... 
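The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services finishing above are instances of one template unit that loads the kernel module named by the instance, and the kernel confirms two of the loads inline ("fuse: init (API version 7.34)", "loop: module loaded"). Whether a module ended up loaded, or was built in, is visible from /proc/modules and /sys/module; a quick sketch:

```python
import os

def module_loaded(name: str) -> bool:
    # /proc/modules lists loadable modules; built-ins appear only in /sys/module.
    with open("/proc/modules") as f:
        return any(line.split()[0] == name for line in f)

for mod in ("fuse", "loop", "configfs"):
    state = ("loaded" if module_loaded(mod)
             else "builtin" if os.path.isdir(f"/sys/module/{mod}")
             else "absent")
    print(mod, state)
```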
Dec 13 14:28:40.548023 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:40.549971 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:28:40.558028 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:40.560016 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:40.562021 systemd-journald[1026]: Time spent on flushing to /var/log/journal/02ebb58a5a9fad6552d4401c0f1a749e is 72.678ms for 1098 entries. Dec 13 14:28:40.562021 systemd-journald[1026]: System Journal (/var/log/journal/02ebb58a5a9fad6552d4401c0f1a749e) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:28:40.674198 systemd-journald[1026]: Received client request to flush runtime journal. Dec 13 14:28:40.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.577210 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:28:40.586273 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:28:40.676033 udevadm[1051]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:28:40.597302 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:28:40.606102 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:28:40.614464 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:28:40.626732 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:28:40.635730 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:40.651791 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:28:40.662351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:28:40.675547 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:28:40.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:40.718240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:28:40.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.284288 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:28:41.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.294997 systemd[1]: Starting systemd-udevd.service... Dec 13 14:28:41.320182 systemd-udevd[1062]: Using default interface naming scheme 'v252'. Dec 13 14:28:41.362637 systemd[1]: Started systemd-udevd.service. 
Dec 13 14:28:41.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.375894 systemd[1]: Starting systemd-networkd.service... Dec 13 14:28:41.392131 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:28:41.437734 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:28:41.501437 systemd[1]: Started systemd-userdbd.service. Dec 13 14:28:41.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.563915 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:28:41.641410 systemd-networkd[1073]: lo: Link UP Dec 13 14:28:41.641425 systemd-networkd[1073]: lo: Gained carrier Dec 13 14:28:41.642269 systemd-networkd[1073]: Enumeration completed Dec 13 14:28:41.642459 systemd-networkd[1073]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:28:41.642506 systemd[1]: Started systemd-networkd.service. Dec 13 14:28:41.645013 systemd-networkd[1073]: eth0: Link UP Dec 13 14:28:41.645210 systemd-networkd[1073]: eth0: Gained carrier Dec 13 14:28:41.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.644000 audit[1071]: AVC avc: denied { confidentiality } for pid=1071 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:28:41.658030 systemd-networkd[1073]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 14:28:41.669915 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:28:41.644000 audit[1071]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5616e61fbe60 a1=337fc a2=7f1a44af3bc5 a3=5 items=110 ppid=1062 pid=1071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:41.644000 audit: CWD cwd="/" Dec 13 14:28:41.644000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=1 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=2 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=3 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=4 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH 
item=5 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=6 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=7 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=8 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=9 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=10 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=11 name=(null) inode=13933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=12 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=13 name=(null) inode=13934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=14 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=15 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=16 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=17 name=(null) inode=13936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=18 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=19 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=20 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=21 name=(null) inode=13943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=22 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=23 name=(null) inode=13944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=24 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=25 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=26 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=27 name=(null) inode=13946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=28 name=(null) inode=13942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=29 name=(null) inode=13947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=30 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=31 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=32 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=33 name=(null) inode=13949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=34 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=35 name=(null) inode=13950 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=36 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=37 name=(null) inode=13951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=38 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=39 name=(null) inode=13952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=40 name=(null) inode=13948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=41 name=(null) inode=13953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=42 name=(null) inode=13928 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=43 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=44 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=45 name=(null) inode=13955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=46 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=47 name=(null) inode=13956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=48 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=49 name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=50 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=51 name=(null) inode=13958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=52 name=(null) inode=13954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=53 name=(null) inode=13959 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH 
item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=55 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=56 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=57 name=(null) inode=13961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=58 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=59 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=60 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=61 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=62 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=63 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=64 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=65 name=(null) inode=13965 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=66 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=67 name=(null) inode=13966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=68 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=69 name=(null) inode=13967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=70 name=(null) inode=13963 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=71 name=(null) inode=13968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=72 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=73 name=(null) inode=13969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=74 name=(null) inode=13969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=75 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=76 name=(null) inode=13969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=77 name=(null) inode=13971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=78 name=(null) inode=13969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=79 name=(null) inode=13972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=80 name=(null) inode=13969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=81 name=(null) inode=13973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=82 name=(null) inode=13969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=83 name=(null) inode=13974 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=84 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=85 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=86 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=87 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=88 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=89 name=(null) inode=13977 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=90 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=91 name=(null) inode=13978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=92 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=93 name=(null) inode=13979 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=94 name=(null) inode=13975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=95 name=(null) inode=13980 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=96 name=(null) inode=13960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=97 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=98 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=99 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=100 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=101 name=(null) inode=13983 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=102 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: 
PATH item=103 name=(null) inode=13984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=104 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=105 name=(null) inode=13985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=106 name=(null) inode=13981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=107 name=(null) inode=13986 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PATH item=109 name=(null) inode=13987 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:41.644000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:28:41.708874 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1087) Dec 13 14:28:41.723925 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:28:41.747909 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 14:28:41.783222 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:28:41.800903 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 14:28:41.804098 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:28:41.807932 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 14:28:41.808014 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:28:41.832742 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:28:41.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.843304 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:28:41.871495 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:28:41.904937 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:28:41.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.913457 systemd[1]: Reached target cryptsetup.target. Dec 13 14:28:41.923876 systemd[1]: Starting lvm2-activation.service... Dec 13 14:28:41.931427 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:28:41.956868 systemd[1]: Finished lvm2-activation.service. 
Dec 13 14:28:41.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:41.965451 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:28:41.974047 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:28:41.974103 systemd[1]: Reached target local-fs.target. Dec 13 14:28:41.982022 systemd[1]: Reached target machines.target. Dec 13 14:28:41.992330 systemd[1]: Starting ldconfig.service... Dec 13 14:28:41.999977 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:42.000084 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:42.002386 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:28:42.010962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:28:42.023386 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:28:42.034273 systemd[1]: Starting systemd-sysext.service... Dec 13 14:28:42.035364 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1105 (bootctl) Dec 13 14:28:42.038198 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:28:42.059634 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:28:42.066098 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:42.066570 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:28:42.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.070558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:28:42.100899 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:28:42.201838 systemd-fsck[1117]: fsck.fat 4.2 (2021-01-31) Dec 13 14:28:42.201838 systemd-fsck[1117]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:28:42.203323 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:28:42.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.215978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:28:42.217497 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:28:42.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.232046 systemd[1]: Mounting boot.mount... Dec 13 14:28:42.254507 systemd[1]: Mounted boot.mount. Dec 13 14:28:42.281617 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:28:42.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:42.300111 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:28:42.329493 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:28:42.351915 (sd-sysext)[1127]: Using extensions 'kubernetes'. Dec 13 14:28:42.354981 (sd-sysext)[1127]: Merged extensions into '/usr'. Dec 13 14:28:42.387468 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:42.390493 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:28:42.398707 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:42.403354 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:42.414346 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:42.424844 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:42.432417 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:42.432696 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:42.433355 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:42.440001 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:28:42.447632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:42.447954 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:42.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.456673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:42.456993 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:42.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.465677 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:42.466128 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:42.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.475843 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:28:42.476054 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:42.480845 systemd[1]: Finished systemd-sysext.service. Dec 13 14:28:42.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.491146 systemd[1]: Starting ensure-sysext.service... Dec 13 14:28:42.500527 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:28:42.509358 systemd[1]: Reloading. Dec 13 14:28:42.541703 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:28:42.546045 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:28:42.558615 systemd-tmpfiles[1141]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:28:42.664919 ldconfig[1104]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:28:42.666614 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2024-12-13T14:28:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:28:42.666658 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2024-12-13T14:28:42Z" level=info msg="torcx already run" Dec 13 14:28:42.817008 systemd-networkd[1073]: eth0: Gained IPv6LL Dec 13 14:28:42.831546 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:28:42.831896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:42.870564 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:28:42.955461 systemd[1]: Finished ldconfig.service. Dec 13 14:28:42.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.964989 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:28:42.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:42.978624 systemd[1]: Starting audit-rules.service... Dec 13 14:28:42.988418 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:28:43.000113 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:28:43.012600 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:28:43.024112 systemd[1]: Starting systemd-resolved.service... Dec 13 14:28:43.033952 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:28:43.044660 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:28:43.054069 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 14:28:43.056000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:28:43.060000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:28:43.060000 audit[1245]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe1795da70 a2=420 a3=0 items=0 ppid=1213 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:43.060000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:28:43.062602 augenrules[1245]: No rules Dec 13 14:28:43.071268 systemd[1]: Finished audit-rules.service. Dec 13 14:28:43.078627 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:28:43.079065 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:28:43.088989 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:28:43.106236 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:28:43.124325 systemd[1]: Finished ensure-sysext.service. Dec 13 14:28:43.133659 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:43.134245 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:43.136424 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:43.147249 systemd[1]: Starting modprobe@drm.service... Dec 13 14:28:43.160715 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:43.172054 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:43.181384 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:28:43.187196 enable-oslogin[1262]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:28:43.187922 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:43.188029 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:43.191349 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:28:43.202958 systemd[1]: Starting systemd-update-done.service... Dec 13 14:28:43.210039 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:28:43.210117 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:43.211637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:43.211957 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:43.220825 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:28:43.221125 systemd[1]: Finished modprobe@drm.service. Dec 13 14:28:43.230498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:43.230745 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:43.239343 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:43.239585 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:28:43.248506 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:28:43.248988 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:28:43.257654 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:28:43.267561 systemd[1]: Finished systemd-update-done.service. Dec 13 14:28:43.276444 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:43.276534 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:43.335282 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:28:43.336993 systemd-timesyncd[1238]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 14:28:43.337696 systemd-timesyncd[1238]: Initial clock synchronization to Fri 2024-12-13 14:28:43.174291 UTC. Dec 13 14:28:43.344577 systemd[1]: Reached target time-set.target. Dec 13 14:28:43.345121 systemd-resolved[1232]: Positive Trust Anchors: Dec 13 14:28:43.345147 systemd-resolved[1232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:28:43.345200 systemd-resolved[1232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:28:43.354002 systemd-resolved[1232]: Defaulting to hostname 'linux'. Dec 13 14:28:43.356125 systemd[1]: Started systemd-resolved.service. Dec 13 14:28:43.364061 systemd[1]: Reached target network.target. Dec 13 14:28:43.371996 systemd[1]: Reached target network-online.target. Dec 13 14:28:43.380004 systemd[1]: Reached target nss-lookup.target. Dec 13 14:28:43.388011 systemd[1]: Reached target sysinit.target. Dec 13 14:28:43.396160 systemd[1]: Started motdgen.path. Dec 13 14:28:43.403100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:28:43.413324 systemd[1]: Started logrotate.timer. Dec 13 14:28:43.420179 systemd[1]: Started mdadm.timer. Dec 13 14:28:43.427015 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:28:43.435007 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:28:43.435060 systemd[1]: Reached target paths.target. Dec 13 14:28:43.441989 systemd[1]: Reached target timers.target. Dec 13 14:28:43.449544 systemd[1]: Listening on dbus.socket. Dec 13 14:28:43.458917 systemd[1]: Starting docker.socket... Dec 13 14:28:43.468164 systemd[1]: Listening on sshd.socket. Dec 13 14:28:43.475096 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:43.475769 systemd[1]: Listening on docker.socket. Dec 13 14:28:43.484048 systemd[1]: Reached target sockets.target. Dec 13 14:28:43.491983 systemd[1]: Reached target basic.target. Dec 13 14:28:43.499215 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:28:43.499322 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Dec 13 14:28:43.499370 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:28:43.501381 systemd[1]: Starting containerd.service... Dec 13 14:28:43.511037 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:28:43.522942 systemd[1]: Starting dbus.service... Dec 13 14:28:43.531848 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:28:43.541199 systemd[1]: Starting extend-filesystems.service... Dec 13 14:28:43.545193 jq[1276]: false Dec 13 14:28:43.549026 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:28:43.551617 systemd[1]: Starting kubelet.service... Dec 13 14:28:43.560813 systemd[1]: Starting motdgen.service... Dec 13 14:28:43.571343 systemd[1]: Starting oem-gce.service... Dec 13 14:28:43.580320 systemd[1]: Starting prepare-helm.service... Dec 13 14:28:43.589104 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:28:43.598031 systemd[1]: Starting sshd-keygen.service... Dec 13 14:28:43.608848 systemd[1]: Starting systemd-logind.service... Dec 13 14:28:43.617025 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:43.617153 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 14:28:43.619407 systemd[1]: Starting update-engine.service... Dec 13 14:28:43.628068 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:28:43.636652 jq[1303]: true Dec 13 14:28:43.641007 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:28:43.641438 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:28:43.652125 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:28:43.652551 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:28:43.680356 systemd[1]: Created slice system-sshd.slice. 
Dec 13 14:28:43.684039 extend-filesystems[1278]: Found loop1 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda1 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda2 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda3 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found usr Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda4 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda6 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda7 Dec 13 14:28:43.684039 extend-filesystems[1278]: Found sda9 Dec 13 14:28:43.684039 extend-filesystems[1278]: Checking size of /dev/sda9 Dec 13 14:28:43.790123 mkfs.ext4[1314]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 14:28:43.790123 mkfs.ext4[1314]: Discarding device blocks: done Dec 13 14:28:43.790123 mkfs.ext4[1314]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 14:28:43.790123 mkfs.ext4[1314]: Filesystem UUID: 2a275ee6-ee59-4200-950a-87d1a43f8aee Dec 13 14:28:43.790123 mkfs.ext4[1314]: Superblock backups stored on blocks: Dec 13 14:28:43.790123 mkfs.ext4[1314]: 32768, 98304, 163840, 229376 Dec 13 14:28:43.790123 mkfs.ext4[1314]: Allocating group tables: done Dec 13 14:28:43.790123 mkfs.ext4[1314]: Writing inode tables: done Dec 13 14:28:43.790123 mkfs.ext4[1314]: Creating journal (8192 blocks): done Dec 13 14:28:43.790123 mkfs.ext4[1314]: Writing superblocks and filesystem accounting information: done Dec 13 14:28:43.710486 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:28:43.790999 extend-filesystems[1278]: Resized partition /dev/sda9 Dec 13 14:28:43.799122 jq[1312]: true Dec 13 14:28:43.710925 systemd[1]: Finished motdgen.service. Dec 13 14:28:43.799499 extend-filesystems[1334]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:28:43.833762 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 14:28:43.838017 tar[1309]: linux-amd64/helm Dec 13 14:28:43.859346 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:28:43.859883 umount[1332]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 14:28:43.894262 update_engine[1302]: I1213 14:28:43.893887 1302 main.cc:92] Flatcar Update Engine starting Dec 13 14:28:43.898207 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 14:28:43.913322 dbus-daemon[1275]: [system] SELinux support is enabled Dec 13 14:28:43.917083 dbus-daemon[1275]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1073 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:28:43.931928 update_engine[1302]: I1213 14:28:43.926184 1302 update_check_scheduler.cc:74] Next update check in 5m22s Dec 13 14:28:43.913678 systemd[1]: Started dbus.service. Dec 13 14:28:43.924947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:28:43.925003 systemd[1]: Reached target system-config.target.
Dec 13 14:28:43.933509 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:28:43.933554 systemd[1]: Reached target user-config.target. Dec 13 14:28:43.942996 extend-filesystems[1334]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 14:28:43.942996 extend-filesystems[1334]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 14:28:43.942996 extend-filesystems[1334]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 14:28:44.018978 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:28:43.942383 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:28:43.997789 dbus-daemon[1275]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:28:44.019356 extend-filesystems[1278]: Resized filesystem in /dev/sda9 Dec 13 14:28:44.045585 bash[1352]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:28:43.961520 systemd[1]: Finished extend-filesystems.service. Dec 13 14:28:43.984999 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:28:43.996771 systemd[1]: Started update-engine.service. Dec 13 14:28:44.043472 systemd[1]: Started locksmithd.service. Dec 13 14:28:44.058063 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:28:44.139013 systemd-logind[1301]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:28:44.139061 systemd-logind[1301]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 13 14:28:44.139097 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:28:44.139436 systemd-logind[1301]: New seat seat0. Dec 13 14:28:44.143127 systemd[1]: Started systemd-logind.service. 
Dec 13 14:28:44.145325 env[1313]: time="2024-12-13T14:28:44.145210255Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:28:44.199214 coreos-metadata[1274]: Dec 13 14:28:44.199 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 14:28:44.218272 coreos-metadata[1274]: Dec 13 14:28:44.218 INFO Fetch failed with 404: resource not found Dec 13 14:28:44.218272 coreos-metadata[1274]: Dec 13 14:28:44.218 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 14:28:44.219479 coreos-metadata[1274]: Dec 13 14:28:44.219 INFO Fetch successful Dec 13 14:28:44.219479 coreos-metadata[1274]: Dec 13 14:28:44.219 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 14:28:44.227921 coreos-metadata[1274]: Dec 13 14:28:44.225 INFO Fetch failed with 404: resource not found Dec 13 14:28:44.227921 coreos-metadata[1274]: Dec 13 14:28:44.225 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 14:28:44.229015 coreos-metadata[1274]: Dec 13 14:28:44.228 INFO Fetch failed with 404: resource not found Dec 13 14:28:44.229015 coreos-metadata[1274]: Dec 13 14:28:44.228 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 14:28:44.230677 coreos-metadata[1274]: Dec 13 14:28:44.230 INFO Fetch successful Dec 13 14:28:44.236047 unknown[1274]: wrote ssh authorized keys file for user: core Dec 13 14:28:44.289334 update-ssh-keys[1367]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:28:44.291608 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:28:44.371023 env[1313]: time="2024-12-13T14:28:44.370916045Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:28:44.376090 env[1313]: time="2024-12-13T14:28:44.376042951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:44.385333 env[1313]: time="2024-12-13T14:28:44.385274809Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:28:44.386966 env[1313]: time="2024-12-13T14:28:44.386922910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:44.387635 env[1313]: time="2024-12-13T14:28:44.387593967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:28:44.387846 env[1313]: time="2024-12-13T14:28:44.387820241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:44.390058 env[1313]: time="2024-12-13T14:28:44.389961786Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:28:44.390237 env[1313]: time="2024-12-13T14:28:44.390206584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:28:44.390496 env[1313]: time="2024-12-13T14:28:44.390468414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:44.394048 env[1313]: time="2024-12-13T14:28:44.394015724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:28:44.394559 env[1313]: time="2024-12-13T14:28:44.394518882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:28:44.395933 env[1313]: time="2024-12-13T14:28:44.395897441Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:28:44.396157 env[1313]: time="2024-12-13T14:28:44.396129757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:28:44.396286 env[1313]: time="2024-12-13T14:28:44.396264788Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:28:44.409769 env[1313]: time="2024-12-13T14:28:44.409667067Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:28:44.410107 env[1313]: time="2024-12-13T14:28:44.410066242Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:28:44.410244 env[1313]: time="2024-12-13T14:28:44.410220985Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:28:44.410496 env[1313]: time="2024-12-13T14:28:44.410455730Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.410704 env[1313]: time="2024-12-13T14:28:44.410679595Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.410863 env[1313]: time="2024-12-13T14:28:44.410827260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.410998 env[1313]: time="2024-12-13T14:28:44.410974575Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.411150 env[1313]: time="2024-12-13T14:28:44.411127991Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.411291 env[1313]: time="2024-12-13T14:28:44.411269305Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.411445 env[1313]: time="2024-12-13T14:28:44.411422846Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.411579 env[1313]: time="2024-12-13T14:28:44.411558505Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.411715 env[1313]: time="2024-12-13T14:28:44.411695453Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:28:44.419010 env[1313]: time="2024-12-13T14:28:44.418967234Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 14:28:44.419377 env[1313]: time="2024-12-13T14:28:44.419353117Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:28:44.420547 env[1313]: time="2024-12-13T14:28:44.420511558Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:28:44.425953 env[1313]: time="2024-12-13T14:28:44.425888307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.427969 env[1313]: time="2024-12-13T14:28:44.427918804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:28:44.428277 env[1313]: time="2024-12-13T14:28:44.428241563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.428752 env[1313]: time="2024-12-13T14:28:44.428707704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.431630 env[1313]: time="2024-12-13T14:28:44.431584066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.431792 env[1313]: time="2024-12-13T14:28:44.431765991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.431982 env[1313]: time="2024-12-13T14:28:44.431957629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.432124 env[1313]: time="2024-12-13T14:28:44.432102089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.432258 env[1313]: time="2024-12-13T14:28:44.432235506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.432423 env[1313]: time="2024-12-13T14:28:44.432397440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.432644 env[1313]: time="2024-12-13T14:28:44.432596878Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:28:44.433159 env[1313]: time="2024-12-13T14:28:44.433131348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.433971 env[1313]: time="2024-12-13T14:28:44.433927461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.434960 env[1313]: time="2024-12-13T14:28:44.434904522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.435128 env[1313]: time="2024-12-13T14:28:44.435104049Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:28:44.435273 env[1313]: time="2024-12-13T14:28:44.435245944Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:28:44.435421 env[1313]: time="2024-12-13T14:28:44.435397900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:28:44.435574 env[1313]: time="2024-12-13T14:28:44.435549179Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:28:44.435765 env[1313]: time="2024-12-13T14:28:44.435721854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:28:44.437648 env[1313]: time="2024-12-13T14:28:44.437524217Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:28:44.440731 env[1313]: time="2024-12-13T14:28:44.440653938Z" level=info msg="Connect containerd service" Dec 13 14:28:44.443037 env[1313]: time="2024-12-13T14:28:44.443002111Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:28:44.450315 env[1313]: time="2024-12-13T14:28:44.450265603Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:28:44.450635 env[1313]: time="2024-12-13T14:28:44.450550904Z" level=info msg="Start subscribing containerd event" Dec 13 14:28:44.454005 env[1313]: time="2024-12-13T14:28:44.453951930Z" level=info msg="Start recovering state" Dec 13 14:28:44.457125 env[1313]: time="2024-12-13T14:28:44.457096124Z" level=info msg="Start event monitor" Dec 13 14:28:44.459756 env[1313]: time="2024-12-13T14:28:44.459722596Z" level=info msg="Start snapshots syncer" 
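Note: the "failed to load cni during init" error above is expected at this point in boot: containerd's CRI plugin looks for a network config under /etc/cni/net.d (per the CniConfig block in the dump) and nothing has installed one yet. A quick check from a shell on the node (hypothetical, not from the log):

    ls /etc/cni/net.d    # empty until a CNI plugin drops a .conf/.conflist file here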
Dec 13 14:28:44.459935 env[1313]: time="2024-12-13T14:28:44.459911855Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:28:44.460062 env[1313]: time="2024-12-13T14:28:44.460042986Z" level=info msg="Start streaming server" Dec 13 14:28:44.461177 env[1313]: time="2024-12-13T14:28:44.461149148Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:28:44.461399 env[1313]: time="2024-12-13T14:28:44.461376923Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:28:44.461643 env[1313]: time="2024-12-13T14:28:44.461622905Z" level=info msg="containerd successfully booted in 0.362237s" Dec 13 14:28:44.461840 systemd[1]: Started containerd.service. Dec 13 14:28:44.485266 dbus-daemon[1275]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:28:44.485516 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:28:44.488121 dbus-daemon[1275]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1365 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:28:44.500499 systemd[1]: Starting polkit.service... Dec 13 14:28:44.613661 polkitd[1376]: Started polkitd version 121 Dec 13 14:28:44.638973 polkitd[1376]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:28:44.639319 polkitd[1376]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:28:44.647220 polkitd[1376]: Finished loading, compiling and executing 2 rules Dec 13 14:28:44.648137 dbus-daemon[1275]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:28:44.648400 systemd[1]: Started polkit.service. Dec 13 14:28:44.649153 polkitd[1376]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:28:44.686962 systemd-hostnamed[1365]: Hostname set to (transient) Dec 13 14:28:44.690906 systemd-resolved[1232]: System hostname changed to 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal'. Dec 13 14:28:45.526920 tar[1309]: linux-amd64/LICENSE Dec 13 14:28:45.527680 tar[1309]: linux-amd64/README.md Dec 13 14:28:45.545930 systemd[1]: Finished prepare-helm.service. Dec 13 14:28:46.009505 systemd[1]: Started kubelet.service. Dec 13 14:28:46.779559 locksmithd[1363]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:28:47.515636 kubelet[1391]: E1213 14:28:47.515531 1391 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:28:47.522022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:28:47.522300 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:28:48.985650 sshd_keygen[1319]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:28:49.028178 systemd[1]: Finished sshd-keygen.service. Dec 13 14:28:49.041433 systemd[1]: Starting issuegen.service... Dec 13 14:28:49.049268 systemd[1]: Started sshd@0-10.128.0.25:22-139.178.68.195:57786.service. Dec 13 14:28:49.070201 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:28:49.070670 systemd[1]: Finished issuegen.service. Dec 13 14:28:49.082010 systemd[1]: Starting systemd-user-sessions.service... 
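Note: kubelet.service exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap (an assumption, but consistent with the static-pod manifests under /etc/kubernetes/manifests that appear later in this log) that file is only written during init, so the unit is expected to crash-loop until then:

    test -f /var/lib/kubelet/config.yaml || echo "kubelet not bootstrapped yet"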
Dec 13 14:28:49.096203 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:28:49.107456 systemd[1]: Started getty@tty1.service. Dec 13 14:28:49.117302 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:28:49.127458 systemd[1]: Reached target getty.target. Dec 13 14:28:49.394513 sshd[1415]: Accepted publickey for core from 139.178.68.195 port 57786 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:49.398324 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:49.438787 systemd[1]: Created slice user-500.slice. Dec 13 14:28:49.448344 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:28:49.467934 systemd-logind[1301]: New session 1 of user core. Dec 13 14:28:49.493279 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:28:49.504822 systemd[1]: Starting user@500.service... Dec 13 14:28:49.546577 (systemd)[1427]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:49.729344 systemd[1427]: Queued start job for default target default.target. Dec 13 14:28:49.729780 systemd[1427]: Reached target paths.target. Dec 13 14:28:49.729809 systemd[1427]: Reached target sockets.target. Dec 13 14:28:49.729832 systemd[1427]: Reached target timers.target. Dec 13 14:28:49.731042 systemd[1427]: Reached target basic.target. Dec 13 14:28:49.731281 systemd[1]: Started user@500.service. Dec 13 14:28:49.731600 systemd[1427]: Reached target default.target. Dec 13 14:28:49.731683 systemd[1427]: Startup finished in 167ms. Dec 13 14:28:49.740053 systemd[1]: Started session-1.scope. Dec 13 14:28:49.971143 systemd[1]: Started sshd@1-10.128.0.25:22-139.178.68.195:36338.service. Dec 13 14:28:50.277883 sshd[1436]: Accepted publickey for core from 139.178.68.195 port 36338 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:50.280414 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:50.289642 systemd[1]: Started session-2.scope. Dec 13 14:28:50.291121 systemd-logind[1301]: New session 2 of user core. Dec 13 14:28:50.497665 sshd[1436]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:50.502733 systemd[1]: sshd@1-10.128.0.25:22-139.178.68.195:36338.service: Deactivated successfully. Dec 13 14:28:50.504180 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:28:50.506912 systemd-logind[1301]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:28:50.513047 systemd-logind[1301]: Removed session 2. Dec 13 14:28:50.538716 systemd[1]: Started sshd@2-10.128.0.25:22-139.178.68.195:36346.service. Dec 13 14:28:50.717715 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 14:28:50.849671 sshd[1443]: Accepted publickey for core from 139.178.68.195 port 36346 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:50.851462 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:50.864432 systemd[1]: Started session-3.scope. Dec 13 14:28:50.865687 systemd-logind[1301]: New session 3 of user core. Dec 13 14:28:51.068908 sshd[1443]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:51.074919 systemd-logind[1301]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:28:51.075259 systemd[1]: sshd@2-10.128.0.25:22-139.178.68.195:36346.service: Deactivated successfully. Dec 13 14:28:51.076726 systemd[1]: session-3.scope: Deactivated successfully. 
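Note: user@500.service is the per-user systemd instance for core (uid 500); the "Startup finished in 167ms" line is that instance reaching default.target. One way to inspect it from the host, assuming this systemd version supports the --user -M syntax:

    systemctl --user -M core@ list-units --type=target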
Dec 13 14:28:51.077828 systemd-logind[1301]: Removed session 3. Dec 13 14:28:52.942892 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:28:52.956970 systemd-nspawn[1453]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 14:28:52.956970 systemd-nspawn[1453]: Press ^] three times within 1s to kill container. Dec 13 14:28:52.970892 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:28:53.048254 systemd[1]: Started oem-gce.service. Dec 13 14:28:53.055516 systemd[1]: Reached target multi-user.target. Dec 13 14:28:53.066662 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:28:53.079573 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:28:53.080008 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:28:53.092592 systemd[1]: Startup finished in 8.928s (kernel) + 17.398s (userspace) = 26.326s. Dec 13 14:28:53.110312 systemd-nspawn[1453]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 14:28:53.110444 systemd-nspawn[1453]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 14:28:53.110599 systemd-nspawn[1453]: + /usr/bin/google_instance_setup Dec 13 14:28:53.693691 instance-setup[1460]: INFO Running google_set_multiqueue. Dec 13 14:28:53.707735 instance-setup[1460]: INFO Set channels for eth0 to 2. Dec 13 14:28:53.711451 instance-setup[1460]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 14:28:53.713211 instance-setup[1460]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 14:28:53.713609 instance-setup[1460]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 14:28:53.715396 instance-setup[1460]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 14:28:53.715786 instance-setup[1460]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 14:28:53.717148 instance-setup[1460]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 14:28:53.717636 instance-setup[1460]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 14:28:53.719186 instance-setup[1460]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 14:28:53.731231 instance-setup[1460]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 14:28:53.731416 instance-setup[1460]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 14:28:53.774061 systemd-nspawn[1453]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 14:28:54.113275 startup-script[1491]: INFO Starting startup scripts. Dec 13 14:28:54.128231 startup-script[1491]: INFO No startup scripts found in metadata. Dec 13 14:28:54.128398 startup-script[1491]: INFO Finished running startup scripts. Dec 13 14:28:54.164911 systemd-nspawn[1453]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 14:28:54.164911 systemd-nspawn[1453]: + daemon_pids=() Dec 13 14:28:54.165707 systemd-nspawn[1453]: + for d in accounts clock_skew network Dec 13 14:28:54.165707 systemd-nspawn[1453]: + daemon_pids+=($!) Dec 13 14:28:54.165707 systemd-nspawn[1453]: + for d in accounts clock_skew network Dec 13 14:28:54.165876 systemd-nspawn[1453]: + daemon_pids+=($!) Dec 13 14:28:54.165876 systemd-nspawn[1453]: + for d in accounts clock_skew network Dec 13 14:28:54.166200 systemd-nspawn[1453]: + daemon_pids+=($!) 
Dec 13 14:28:54.166344 systemd-nspawn[1453]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 14:28:54.166409 systemd-nspawn[1453]: + /usr/bin/systemd-notify --ready Dec 13 14:28:54.166584 systemd-nspawn[1453]: + /usr/bin/google_clock_skew_daemon Dec 13 14:28:54.167204 systemd-nspawn[1453]: + /usr/bin/google_accounts_daemon Dec 13 14:28:54.167426 systemd-nspawn[1453]: + /usr/bin/google_network_daemon Dec 13 14:28:54.225641 systemd-nspawn[1453]: + wait -n 36 37 38 Dec 13 14:28:54.731129 google-clock-skew[1495]: INFO Starting Google Clock Skew daemon. Dec 13 14:28:54.757348 google-clock-skew[1495]: INFO Clock drift token has changed: 0. Dec 13 14:28:54.772454 systemd-nspawn[1453]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 14:28:54.772806 systemd-nspawn[1453]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 14:28:54.773868 google-clock-skew[1495]: WARNING Failed to sync system time with hardware clock. Dec 13 14:28:54.819067 google-networking[1496]: INFO Starting Google Networking daemon. Dec 13 14:28:54.935481 groupadd[1506]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 14:28:54.938812 groupadd[1506]: group added to /etc/gshadow: name=google-sudoers Dec 13 14:28:54.942631 groupadd[1506]: new group: name=google-sudoers, GID=1000 Dec 13 14:28:54.956129 google-accounts[1494]: INFO Starting Google Accounts daemon. Dec 13 14:28:54.982530 google-accounts[1494]: WARNING OS Login not installed. Dec 13 14:28:54.983815 google-accounts[1494]: INFO Creating a new user account for 0. Dec 13 14:28:54.989001 systemd-nspawn[1453]: useradd: invalid user name '0': use --badname to ignore Dec 13 14:28:54.989687 google-accounts[1494]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 14:28:57.773754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:28:57.774141 systemd[1]: Stopped kubelet.service. Dec 13 14:28:57.776789 systemd[1]: Starting kubelet.service... Dec 13 14:28:58.052527 systemd[1]: Started kubelet.service. Dec 13 14:28:58.128291 kubelet[1524]: E1213 14:28:58.128228 1524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:28:58.133079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:28:58.133395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:01.068541 systemd[1]: Started sshd@3-10.128.0.25:22-139.178.68.195:46390.service. Dec 13 14:29:01.356501 sshd[1532]: Accepted publickey for core from 139.178.68.195 port 46390 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:01.358793 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:01.364937 systemd-logind[1301]: New session 4 of user core. Dec 13 14:29:01.366303 systemd[1]: Started session-4.scope. Dec 13 14:29:01.573832 sshd[1532]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:01.578668 systemd[1]: sshd@3-10.128.0.25:22-139.178.68.195:46390.service: Deactivated successfully. Dec 13 14:29:01.580176 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:29:01.582263 systemd-logind[1301]: Session 4 logged out. 
Waiting for processes to exit. Dec 13 14:29:01.583757 systemd-logind[1301]: Removed session 4. Dec 13 14:29:01.618103 systemd[1]: Started sshd@4-10.128.0.25:22-139.178.68.195:46392.service. Dec 13 14:29:01.904130 sshd[1539]: Accepted publickey for core from 139.178.68.195 port 46392 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:01.906409 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:01.914325 systemd[1]: Started session-5.scope. Dec 13 14:29:01.914666 systemd-logind[1301]: New session 5 of user core. Dec 13 14:29:02.114510 sshd[1539]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:02.119523 systemd[1]: sshd@4-10.128.0.25:22-139.178.68.195:46392.service: Deactivated successfully. Dec 13 14:29:02.121197 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:29:02.121330 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:29:02.123676 systemd-logind[1301]: Removed session 5. Dec 13 14:29:02.159096 systemd[1]: Started sshd@5-10.128.0.25:22-139.178.68.195:46396.service. Dec 13 14:29:02.445684 sshd[1546]: Accepted publickey for core from 139.178.68.195 port 46396 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:02.447617 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:02.455338 systemd[1]: Started session-6.scope. Dec 13 14:29:02.455674 systemd-logind[1301]: New session 6 of user core. Dec 13 14:29:02.662679 sshd[1546]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:02.667347 systemd[1]: sshd@5-10.128.0.25:22-139.178.68.195:46396.service: Deactivated successfully. Dec 13 14:29:02.668799 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:29:02.670837 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:29:02.672505 systemd-logind[1301]: Removed session 6. Dec 13 14:29:02.706997 systemd[1]: Started sshd@6-10.128.0.25:22-139.178.68.195:46412.service. Dec 13 14:29:02.992522 sshd[1553]: Accepted publickey for core from 139.178.68.195 port 46412 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:29:02.994591 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:03.002588 systemd[1]: Started session-7.scope. Dec 13 14:29:03.003003 systemd-logind[1301]: New session 7 of user core. Dec 13 14:29:03.189287 sudo[1557]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:29:03.189810 sudo[1557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:29:03.227451 systemd[1]: Starting docker.service... 
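Note on the google-accounts failure a few lines up: the daemon tried to create an account literally named "0", and shadow-utils rejects that as an invalid login name, hence exit status 3. The log's own command line reproduces it:

    useradd -m -s /bin/bash -p '*' 0    # useradd: invalid user name '0'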
Dec 13 14:29:03.283771 env[1567]: time="2024-12-13T14:29:03.283617453Z" level=info msg="Starting up" Dec 13 14:29:03.286166 env[1567]: time="2024-12-13T14:29:03.286121844Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:29:03.286166 env[1567]: time="2024-12-13T14:29:03.286160600Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:29:03.286381 env[1567]: time="2024-12-13T14:29:03.286194414Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:29:03.286381 env[1567]: time="2024-12-13T14:29:03.286219880Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:29:03.289951 env[1567]: time="2024-12-13T14:29:03.289919718Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:29:03.290136 env[1567]: time="2024-12-13T14:29:03.290112339Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:29:03.290246 env[1567]: time="2024-12-13T14:29:03.290221524Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:29:03.290343 env[1567]: time="2024-12-13T14:29:03.290325220Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:29:03.513711 env[1567]: time="2024-12-13T14:29:03.513650164Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:29:03.513711 env[1567]: time="2024-12-13T14:29:03.513688224Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:29:03.514154 env[1567]: time="2024-12-13T14:29:03.514043454Z" level=info msg="Loading containers: start." Dec 13 14:29:03.683922 kernel: Initializing XFRM netlink socket Dec 13 14:29:03.728848 env[1567]: time="2024-12-13T14:29:03.728777620Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:29:03.811290 systemd-networkd[1073]: docker0: Link UP Dec 13 14:29:03.832828 env[1567]: time="2024-12-13T14:29:03.832764669Z" level=info msg="Loading containers: done." Dec 13 14:29:03.852617 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1610851118-merged.mount: Deactivated successfully. Dec 13 14:29:03.856799 env[1567]: time="2024-12-13T14:29:03.856729109Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:29:03.857109 env[1567]: time="2024-12-13T14:29:03.857058258Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:29:03.857250 env[1567]: time="2024-12-13T14:29:03.857215387Z" level=info msg="Daemon has completed initialization" Dec 13 14:29:03.882096 systemd[1]: Started docker.service. Dec 13 14:29:03.891224 env[1567]: time="2024-12-13T14:29:03.891147277Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:29:05.120973 env[1313]: time="2024-12-13T14:29:05.120901436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:29:05.606618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057809138.mount: Deactivated successfully. 
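Note: dockerd came up serving only the unix socket ("API listen on /run/docker.sock"); no TCP listener is configured. A smoke test from a root shell (hypothetical):

    docker -H unix:///run/docker.sock version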
Dec 13 14:29:07.685080 env[1313]: time="2024-12-13T14:29:07.685014379Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:07.687655 env[1313]: time="2024-12-13T14:29:07.687605178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:07.690165 env[1313]: time="2024-12-13T14:29:07.690119145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:07.692457 env[1313]: time="2024-12-13T14:29:07.692415897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:07.693527 env[1313]: time="2024-12-13T14:29:07.693477327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:29:07.710777 env[1313]: time="2024-12-13T14:29:07.710724211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:29:08.308784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:29:08.309233 systemd[1]: Stopped kubelet.service. Dec 13 14:29:08.312928 systemd[1]: Starting kubelet.service... Dec 13 14:29:08.657419 systemd[1]: Started kubelet.service. Dec 13 14:29:08.766184 kubelet[1708]: E1213 14:29:08.766088 1708 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:08.769675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:08.769992 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
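Note: the PullImage/ImageCreate lines come from containerd's CRI plugin (the env[1313] process) handling ImageService pull requests, not from docker. The same pull can be issued by hand with crictl, assuming it is pointed at the containerd socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.29.12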
Dec 13 14:29:09.983220 env[1313]: time="2024-12-13T14:29:09.983138095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:09.985918 env[1313]: time="2024-12-13T14:29:09.985845822Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:09.988173 env[1313]: time="2024-12-13T14:29:09.988130997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:09.990276 env[1313]: time="2024-12-13T14:29:09.990232800Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:09.991469 env[1313]: time="2024-12-13T14:29:09.991415876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:29:10.009409 env[1313]: time="2024-12-13T14:29:10.009344064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:29:11.210041 env[1313]: time="2024-12-13T14:29:11.209941252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:11.213756 env[1313]: time="2024-12-13T14:29:11.213701178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:11.216744 env[1313]: time="2024-12-13T14:29:11.216695999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:11.220075 env[1313]: time="2024-12-13T14:29:11.220031648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:11.221827 env[1313]: time="2024-12-13T14:29:11.221772116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:29:11.240697 env[1313]: time="2024-12-13T14:29:11.240635623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:29:12.475402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545029262.mount: Deactivated successfully. 
Dec 13 14:29:13.172563 env[1313]: time="2024-12-13T14:29:13.172487104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:13.176263 env[1313]: time="2024-12-13T14:29:13.176196094Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:13.184788 env[1313]: time="2024-12-13T14:29:13.184747832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:13.187674 env[1313]: time="2024-12-13T14:29:13.187606880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:13.188737 env[1313]: time="2024-12-13T14:29:13.188682004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:29:13.206610 env[1313]: time="2024-12-13T14:29:13.206534890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:29:13.646121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360383525.mount: Deactivated successfully. Dec 13 14:29:14.698963 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:29:14.909557 env[1313]: time="2024-12-13T14:29:14.909471924Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:14.913367 env[1313]: time="2024-12-13T14:29:14.913304085Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:14.916397 env[1313]: time="2024-12-13T14:29:14.916356935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:14.919827 env[1313]: time="2024-12-13T14:29:14.919787247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:14.921605 env[1313]: time="2024-12-13T14:29:14.921538612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:29:14.939747 env[1313]: time="2024-12-13T14:29:14.939696722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:29:15.355535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940985361.mount: Deactivated successfully. 
Dec 13 14:29:15.361667 env[1313]: time="2024-12-13T14:29:15.361605524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:15.364542 env[1313]: time="2024-12-13T14:29:15.364501480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:15.367466 env[1313]: time="2024-12-13T14:29:15.367427986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:15.369498 env[1313]: time="2024-12-13T14:29:15.369454974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:15.370289 env[1313]: time="2024-12-13T14:29:15.370238471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:29:15.387754 env[1313]: time="2024-12-13T14:29:15.387689830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:29:15.745713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233689162.mount: Deactivated successfully. Dec 13 14:29:18.494361 env[1313]: time="2024-12-13T14:29:18.494279653Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:18.497328 env[1313]: time="2024-12-13T14:29:18.497267060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:18.501808 env[1313]: time="2024-12-13T14:29:18.501761944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:18.504564 env[1313]: time="2024-12-13T14:29:18.504513849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:18.505789 env[1313]: time="2024-12-13T14:29:18.505733855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:29:18.796227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:29:18.797511 systemd[1]: Stopped kubelet.service. Dec 13 14:29:18.800454 systemd[1]: Starting kubelet.service... Dec 13 14:29:19.054444 systemd[1]: Started kubelet.service. 
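Note: this is now the third scheduled restart of kubelet.service for the same missing-config failure; systemd keeps a per-unit counter, readable (assuming systemd >= 235) with:

    systemctl show kubelet.service -p NRestarts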
Dec 13 14:29:19.187871 kubelet[1782]: E1213 14:29:19.187781 1782 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:19.190597 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:19.190980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:22.516322 systemd[1]: Stopped kubelet.service. Dec 13 14:29:22.520539 systemd[1]: Starting kubelet.service... Dec 13 14:29:22.556880 systemd[1]: Reloading. Dec 13 14:29:22.696162 /usr/lib/systemd/system-generators/torcx-generator[1851]: time="2024-12-13T14:29:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:29:22.696219 /usr/lib/systemd/system-generators/torcx-generator[1851]: time="2024-12-13T14:29:22Z" level=info msg="torcx already run" Dec 13 14:29:22.847507 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:29:22.847536 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:29:22.872026 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:29:22.999978 systemd[1]: Started kubelet.service. Dec 13 14:29:23.005046 systemd[1]: Stopping kubelet.service... Dec 13 14:29:23.006205 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:29:23.006624 systemd[1]: Stopped kubelet.service. Dec 13 14:29:23.010714 systemd[1]: Starting kubelet.service... Dec 13 14:29:23.231265 systemd[1]: Started kubelet.service. Dec 13 14:29:23.313810 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:29:23.313810 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:29:23.313810 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
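Note: two of the three deprecation warnings above ask for the flag to move into the kubelet config file (--pod-infra-container-image is simply slated for removal). A sketch of the equivalent KubeletConfiguration fields; the values are illustrative, assembled from paths seen elsewhere in this log rather than read from the node's actual flags:

    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF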
Dec 13 14:29:23.314603 kubelet[1916]: I1213 14:29:23.313920 1916 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:29:23.538193 kubelet[1916]: I1213 14:29:23.538038 1916 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:29:23.538193 kubelet[1916]: I1213 14:29:23.538092 1916 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:29:23.539438 kubelet[1916]: I1213 14:29:23.539397 1916 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:29:23.585608 kubelet[1916]: I1213 14:29:23.585554 1916 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:29:23.586771 kubelet[1916]: E1213 14:29:23.586642 1916 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.602054 kubelet[1916]: I1213 14:29:23.602020 1916 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:29:23.603842 kubelet[1916]: I1213 14:29:23.603793 1916 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:29:23.604175 kubelet[1916]: I1213 14:29:23.604135 1916 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:29:23.605023 kubelet[1916]: I1213 14:29:23.604987 1916 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:29:23.605023 kubelet[1916]: I1213 14:29:23.605025 1916 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:29:23.605227 kubelet[1916]: I1213 14:29:23.605198 1916 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:29:23.605404 kubelet[1916]: I1213 14:29:23.605386 1916 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:29:23.605504 
kubelet[1916]: I1213 14:29:23.605418 1916 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:29:23.605504 kubelet[1916]: I1213 14:29:23.605478 1916 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:29:23.605504 kubelet[1916]: I1213 14:29:23.605507 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:29:23.610203 kubelet[1916]: W1213 14:29:23.610138 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.610361 kubelet[1916]: E1213 14:29:23.610232 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.610442 kubelet[1916]: I1213 14:29:23.610359 1916 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:29:23.617373 kubelet[1916]: W1213 14:29:23.617294 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.617373 kubelet[1916]: E1213 14:29:23.617355 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.618821 kubelet[1916]: I1213 14:29:23.618773 1916 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:29:23.618946 kubelet[1916]: W1213 14:29:23.618888 1916 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:29:23.619765 kubelet[1916]: I1213 14:29:23.619739 1916 server.go:1256] "Started kubelet" Dec 13 14:29:23.620017 kubelet[1916]: I1213 14:29:23.619977 1916 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:29:23.621579 kubelet[1916]: I1213 14:29:23.621130 1916 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:29:23.633660 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
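Note: every "dial tcp 10.128.0.25:6443: connect: connection refused" in this stretch is the kubelet's client-go reflectors probing an API server that is not running yet; the kube-apiserver static pod admitted just below is what will eventually serve that port. Until it starts:

    curl -k https://10.128.0.25:6443/healthz    # connection refused during this window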
Dec 13 14:29:23.634663 kubelet[1916]: I1213 14:29:23.633893 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:29:23.635288 kubelet[1916]: I1213 14:29:23.635264 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:29:23.635698 kubelet[1916]: I1213 14:29:23.635676 1916 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:29:23.642899 kubelet[1916]: I1213 14:29:23.642174 1916 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:29:23.645834 kubelet[1916]: I1213 14:29:23.645805 1916 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:29:23.646091 kubelet[1916]: I1213 14:29:23.646069 1916 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:29:23.649387 kubelet[1916]: E1213 14:29:23.649353 1916 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.25:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal.1810c2e9417fd001 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,UID:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:29:23.619704833 +0000 UTC m=+0.370239028,LastTimestamp:2024-12-13 14:29:23.619704833 +0000 UTC m=+0.370239028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,}" Dec 13 14:29:23.650938 kubelet[1916]: W1213 14:29:23.650881 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.651116 kubelet[1916]: E1213 14:29:23.651089 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused Dec 13 14:29:23.651360 kubelet[1916]: E1213 14:29:23.651342 1916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="200ms" Dec 13 14:29:23.653113 kubelet[1916]: I1213 14:29:23.653087 1916 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:29:23.653919 kubelet[1916]: I1213 14:29:23.653892 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:29:23.658055 kubelet[1916]: I1213 14:29:23.658031 1916 factory.go:221] Registration of the containerd container factory successfully Dec 
Dec 13 14:29:23.671649 kubelet[1916]: I1213 14:29:23.671620 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:29:23.673581 kubelet[1916]: I1213 14:29:23.673559 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:29:23.673745 kubelet[1916]: I1213 14:29:23.673727 1916 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:29:23.673898 kubelet[1916]: I1213 14:29:23.673883 1916 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:29:23.674084 kubelet[1916]: E1213 14:29:23.674069 1916 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:29:23.683727 kubelet[1916]: E1213 14:29:23.683700 1916 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:29:23.689136 kubelet[1916]: W1213 14:29:23.688921 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:23.689402 kubelet[1916]: E1213 14:29:23.689367 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:23.714570 kubelet[1916]: I1213 14:29:23.714480 1916 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:29:23.714768 kubelet[1916]: I1213 14:29:23.714624 1916 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:29:23.714768 kubelet[1916]: I1213 14:29:23.714664 1916 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:29:23.717087 kubelet[1916]: I1213 14:29:23.717044 1916 policy_none.go:49] "None policy: Start"
Dec 13 14:29:23.718225 kubelet[1916]: I1213 14:29:23.718185 1916 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:29:23.718225 kubelet[1916]: I1213 14:29:23.718224 1916 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:29:23.724532 kubelet[1916]: I1213 14:29:23.724489 1916 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:29:23.724834 kubelet[1916]: I1213 14:29:23.724797 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:29:23.728344 kubelet[1916]: E1213 14:29:23.728301 1916 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" not found"
Dec 13 14:29:23.751914 kubelet[1916]: I1213 14:29:23.751882 1916 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.752453 kubelet[1916]: E1213 14:29:23.752414 1916 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
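The "Attempting to register node" / "Unable to register node" pair above is the kubelet POSTing its Node object to the API server, which is not yet reachable. A minimal stdlib sketch of that call, assuming a placeholder bearer token and skipping TLS verification purely for brevity (the real kubelet authenticates with its bootstrap client certificate):

```go
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Minimal Node manifest, mirroring the node name seen in the log.
	node := []byte(`{"apiVersion":"v1","kind":"Node","metadata":{"name":"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"}}`)

	// INSECURE: verification is skipped only to keep the sketch short.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, _ := http.NewRequest("POST", "https://10.128.0.25:6443/api/v1/nodes", bytes.NewReader(node))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer <token>") // hypothetical credential

	resp, err := client.Do(req)
	if err != nil {
		// While the static apiserver pod is still starting, this is exactly
		// the "dial tcp 10.128.0.25:6443: connect: connection refused" above.
		fmt.Println("register failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("register status:", resp.Status)
}
```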
Dec 13 14:29:23.774866 kubelet[1916]: I1213 14:29:23.774806 1916 topology_manager.go:215] "Topology Admit Handler" podUID="f3e256f03bdb24930e0716a1872182fb" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.782655 kubelet[1916]: I1213 14:29:23.782618 1916 topology_manager.go:215] "Topology Admit Handler" podUID="f8d9e0681bc279e036bc3ce446934778" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.793660 kubelet[1916]: I1213 14:29:23.790495 1916 topology_manager.go:215] "Topology Admit Handler" podUID="9a5d2e1a14c349d22b5d63e96665b331" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.847576 kubelet[1916]: I1213 14:29:23.847512 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.847576 kubelet[1916]: I1213 14:29:23.847583 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.847921 kubelet[1916]: I1213 14:29:23.847623 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3e256f03bdb24930e0716a1872182fb-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f3e256f03bdb24930e0716a1872182fb\") " pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.847921 kubelet[1916]: I1213 14:29:23.847674 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3e256f03bdb24930e0716a1872182fb-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f3e256f03bdb24930e0716a1872182fb\") " pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.847921 kubelet[1916]: I1213 14:29:23.847711 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3e256f03bdb24930e0716a1872182fb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f3e256f03bdb24930e0716a1872182fb\") " pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.847921 kubelet[1916]: I1213 14:29:23.847746 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.848125 kubelet[1916]: I1213 14:29:23.847798 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.848125 kubelet[1916]: I1213 14:29:23.847833 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.848125 kubelet[1916]: I1213 14:29:23.847898 1916 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a5d2e1a14c349d22b5d63e96665b331-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"9a5d2e1a14c349d22b5d63e96665b331\") " pod="kube-system/kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.852067 kubelet[1916]: E1213 14:29:23.852016 1916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="400ms"
Dec 13 14:29:23.959635 kubelet[1916]: I1213 14:29:23.959590 1916 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:23.960127 kubelet[1916]: E1213 14:29:23.960100 1916 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:24.095440 env[1313]: time="2024-12-13T14:29:24.095370135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,Uid:f3e256f03bdb24930e0716a1872182fb,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:24.111778 env[1313]: time="2024-12-13T14:29:24.111311784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,Uid:9a5d2e1a14c349d22b5d63e96665b331,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:24.112196 env[1313]: time="2024-12-13T14:29:24.112147718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,Uid:f8d9e0681bc279e036bc3ce446934778,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:24.253121 kubelet[1916]: E1213 14:29:24.253068 1916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="800ms"
Dec 13 14:29:24.368176 kubelet[1916]: I1213 14:29:24.367526 1916 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:24.368176 kubelet[1916]: E1213 14:29:24.368082 1916 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:24.499531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044985714.mount: Deactivated successfully.
Dec 13 14:29:24.508384 env[1313]: time="2024-12-13T14:29:24.508321981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.509877 env[1313]: time="2024-12-13T14:29:24.509814466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.513452 env[1313]: time="2024-12-13T14:29:24.513398591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.516127 env[1313]: time="2024-12-13T14:29:24.516068314Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.517390 env[1313]: time="2024-12-13T14:29:24.517338081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.518873 env[1313]: time="2024-12-13T14:29:24.518807139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.521405 env[1313]: time="2024-12-13T14:29:24.521359045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.523898 env[1313]: time="2024-12-13T14:29:24.523837853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.526593 env[1313]: time="2024-12-13T14:29:24.526547966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.529689 env[1313]: time="2024-12-13T14:29:24.529654179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.530742 env[1313]: time="2024-12-13T14:29:24.530692913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.531594 env[1313]: time="2024-12-13T14:29:24.531555184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:24.564719 env[1313]: time="2024-12-13T14:29:24.564592169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:24.564719 env[1313]: time="2024-12-13T14:29:24.564661562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:24.565206 env[1313]: time="2024-12-13T14:29:24.564682874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:24.565206 env[1313]: time="2024-12-13T14:29:24.564996169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/676a1b7cf0a4c05715073760cbd7b276e54d61816e63963d8489b92db086d212 pid=1954 runtime=io.containerd.runc.v2
Dec 13 14:29:24.644845 env[1313]: time="2024-12-13T14:29:24.642564307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:24.644845 env[1313]: time="2024-12-13T14:29:24.642695308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:24.644845 env[1313]: time="2024-12-13T14:29:24.642759136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:24.644845 env[1313]: time="2024-12-13T14:29:24.643107507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f13e388acacdfb8798d0129f30d8f6b2c582d43f24f3a0175a45b3df879a2733 pid=1983 runtime=io.containerd.runc.v2
Dec 13 14:29:24.652009 kubelet[1916]: W1213 14:29:24.646543 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.652009 kubelet[1916]: E1213 14:29:24.646683 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.666002 env[1313]: time="2024-12-13T14:29:24.665873692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:24.666390 env[1313]: time="2024-12-13T14:29:24.666295287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:24.666683 env[1313]: time="2024-12-13T14:29:24.666615550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:24.668893 env[1313]: time="2024-12-13T14:29:24.668768035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97301efe1cd4605fe9ec84b6691aa95bdf2b28d506e9340bb4c71f86a953553f pid=1987 runtime=io.containerd.runc.v2
Dec 13 14:29:24.743780 kubelet[1916]: W1213 14:29:24.743636 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.744073 kubelet[1916]: E1213 14:29:24.743796 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.25:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.756163 env[1313]: time="2024-12-13T14:29:24.755942504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,Uid:9a5d2e1a14c349d22b5d63e96665b331,Namespace:kube-system,Attempt:0,} returns sandbox id \"676a1b7cf0a4c05715073760cbd7b276e54d61816e63963d8489b92db086d212\""
Dec 13 14:29:24.761249 kubelet[1916]: E1213 14:29:24.760649 1916 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-21291"
Dec 13 14:29:24.766027 env[1313]: time="2024-12-13T14:29:24.765976551Z" level=info msg="CreateContainer within sandbox \"676a1b7cf0a4c05715073760cbd7b276e54d61816e63963d8489b92db086d212\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:29:24.801392 env[1313]: time="2024-12-13T14:29:24.801329903Z" level=info msg="CreateContainer within sandbox \"676a1b7cf0a4c05715073760cbd7b276e54d61816e63963d8489b92db086d212\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c395bb47bff8d3882c0f4abae316d0a42baa29475f35885f7141e029f69793d2\""
Dec 13 14:29:24.802662 env[1313]: time="2024-12-13T14:29:24.802618030Z" level=info msg="StartContainer for \"c395bb47bff8d3882c0f4abae316d0a42baa29475f35885f7141e029f69793d2\""
Dec 13 14:29:24.808772 env[1313]: time="2024-12-13T14:29:24.808727036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,Uid:f3e256f03bdb24930e0716a1872182fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f13e388acacdfb8798d0129f30d8f6b2c582d43f24f3a0175a45b3df879a2733\""
Dec 13 14:29:24.810935 kubelet[1916]: E1213 14:29:24.810513 1916 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-21291"
Dec 13 14:29:24.812782 env[1313]: time="2024-12-13T14:29:24.812743582Z" level=info msg="CreateContainer within sandbox \"f13e388acacdfb8798d0129f30d8f6b2c582d43f24f3a0175a45b3df879a2733\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:29:24.815326 kubelet[1916]: W1213 14:29:24.815160 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.815326 kubelet[1916]: E1213 14:29:24.815290 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.820939 env[1313]: time="2024-12-13T14:29:24.820899678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,Uid:f8d9e0681bc279e036bc3ce446934778,Namespace:kube-system,Attempt:0,} returns sandbox id \"97301efe1cd4605fe9ec84b6691aa95bdf2b28d506e9340bb4c71f86a953553f\""
Dec 13 14:29:24.822532 kubelet[1916]: E1213 14:29:24.822506 1916 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flat"
Dec 13 14:29:24.824548 env[1313]: time="2024-12-13T14:29:24.824504061Z" level=info msg="CreateContainer within sandbox \"97301efe1cd4605fe9ec84b6691aa95bdf2b28d506e9340bb4c71f86a953553f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:29:24.834993 env[1313]: time="2024-12-13T14:29:24.834940726Z" level=info msg="CreateContainer within sandbox \"f13e388acacdfb8798d0129f30d8f6b2c582d43f24f3a0175a45b3df879a2733\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a4809836817d4b5c999dae45e54c7666ad61e7664f391326d7aec5dd0751df8\""
Dec 13 14:29:24.835931 env[1313]: time="2024-12-13T14:29:24.835871566Z" level=info msg="StartContainer for \"6a4809836817d4b5c999dae45e54c7666ad61e7664f391326d7aec5dd0751df8\""
Dec 13 14:29:24.852876 env[1313]: time="2024-12-13T14:29:24.852770859Z" level=info msg="CreateContainer within sandbox \"97301efe1cd4605fe9ec84b6691aa95bdf2b28d506e9340bb4c71f86a953553f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a2923861f8a11aa17614c6ca852ed7782af0f6863a004546dbc5c6cf687a22a\""
Dec 13 14:29:24.853726 env[1313]: time="2024-12-13T14:29:24.853666833Z" level=info msg="StartContainer for \"9a2923861f8a11aa17614c6ca852ed7782af0f6863a004546dbc5c6cf687a22a\""
Dec 13 14:29:24.870778 kubelet[1916]: W1213 14:29:24.870646 1916 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.870778 kubelet[1916]: E1213 14:29:24.870744 1916 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.25:6443: connect: connection refused
Dec 13 14:29:24.985734 env[1313]: time="2024-12-13T14:29:24.984039520Z" level=info msg="StartContainer for \"c395bb47bff8d3882c0f4abae316d0a42baa29475f35885f7141e029f69793d2\" returns successfully"
Dec 13 14:29:25.053789 kubelet[1916]: E1213 14:29:25.053708 1916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.25:6443: connect: connection refused" interval="1.6s"
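The lease-controller retries above step through interval="200ms", "400ms", "800ms", and now "1.6s": a doubling backoff. A stdlib sketch of that schedule; only the doubling is visible in the log, so the 7s cap here is an assumption:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first "will retry" interval in the log
	maxInterval := 7 * time.Second     // assumed cap; not shown in the log

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: next lease retry in %v\n", attempt, interval)
		// time.Sleep(interval) would go here in a real retry loop.
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, as logged
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```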
Dec 13 14:29:25.057280 env[1313]: time="2024-12-13T14:29:25.057218990Z" level=info msg="StartContainer for \"6a4809836817d4b5c999dae45e54c7666ad61e7664f391326d7aec5dd0751df8\" returns successfully"
Dec 13 14:29:25.068335 env[1313]: time="2024-12-13T14:29:25.068277428Z" level=info msg="StartContainer for \"9a2923861f8a11aa17614c6ca852ed7782af0f6863a004546dbc5c6cf687a22a\" returns successfully"
Dec 13 14:29:25.176813 kubelet[1916]: I1213 14:29:25.176764 1916 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:25.177375 kubelet[1916]: E1213 14:29:25.177347 1916 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.25:6443/api/v1/nodes\": dial tcp 10.128.0.25:6443: connect: connection refused" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:26.785174 kubelet[1916]: I1213 14:29:26.785116 1916 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:28.426214 kubelet[1916]: E1213 14:29:28.426169 1916 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:28.511374 kubelet[1916]: I1213 14:29:28.511304 1916 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:28.558874 kubelet[1916]: E1213 14:29:28.558818 1916 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal.1810c2e9417fd001 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,UID:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:29:23.619704833 +0000 UTC m=+0.370239028,LastTimestamp:2024-12-13 14:29:23.619704833 +0000 UTC m=+0.370239028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal,}"
Dec 13 14:29:28.613800 kubelet[1916]: I1213 14:29:28.613744 1916 apiserver.go:52] "Watching apiserver"
Dec 13 14:29:28.646961 kubelet[1916]: I1213 14:29:28.646907 1916 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:29:28.780318 update_engine[1302]: I1213 14:29:28.779937 1302 update_attempter.cc:509] Updating boot flags...
Dec 13 14:29:28.998795 kubelet[1916]: E1213 14:29:28.998751 1916 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:30.936989 systemd[1]: Reloading.
Dec 13 14:29:31.036492 /usr/lib/systemd/system-generators/torcx-generator[2218]: time="2024-12-13T14:29:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:29:31.037626 /usr/lib/systemd/system-generators/torcx-generator[2218]: time="2024-12-13T14:29:31Z" level=info msg="torcx already run"
Dec 13 14:29:31.189086 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:29:31.189115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:29:31.215035 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:29:31.354842 systemd[1]: Stopping kubelet.service...
Dec 13 14:29:31.355751 kubelet[1916]: I1213 14:29:31.355315 1916 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:29:31.375805 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:29:31.376342 systemd[1]: Stopped kubelet.service.
Dec 13 14:29:31.380482 systemd[1]: Starting kubelet.service...
Dec 13 14:29:31.601051 systemd[1]: Started kubelet.service.
Dec 13 14:29:31.717096 sudo[2288]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:29:31.717545 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:29:31.738510 kubelet[2277]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:29:31.739092 kubelet[2277]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:29:31.739092 kubelet[2277]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:29:31.739092 kubelet[2277]: I1213 14:29:31.739034 2277 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:29:31.747486 kubelet[2277]: I1213 14:29:31.747454 2277 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:29:31.747695 kubelet[2277]: I1213 14:29:31.747678 2277 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:29:31.748294 kubelet[2277]: I1213 14:29:31.748270 2277 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:29:31.751652 kubelet[2277]: I1213 14:29:31.751630 2277 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:29:31.756052 kubelet[2277]: I1213 14:29:31.756031 2277 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:29:31.770087 kubelet[2277]: I1213 14:29:31.770061 2277 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:29:31.771122 kubelet[2277]: I1213 14:29:31.771102 2277 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:29:31.771892 kubelet[2277]: I1213 14:29:31.771840 2277 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:29:31.772193 kubelet[2277]: I1213 14:29:31.772174 2277 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:29:31.772340 kubelet[2277]: I1213 14:29:31.772303 2277 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:29:31.772492 kubelet[2277]: I1213 14:29:31.772475 2277 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:29:31.772726 kubelet[2277]: I1213 14:29:31.772711 2277 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:29:31.772849 kubelet[2277]: I1213 14:29:31.772836 2277 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:29:31.773117 kubelet[2277]: I1213 14:29:31.773100 2277 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:29:31.773242 kubelet[2277]: I1213 14:29:31.773229 2277 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
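The HardEvictionThresholds array buried in the nodeConfig line above is easier to read unpacked. A sketch restating the logged defaults as data; the struct and names are illustrative, not the kubelet's own types:

```go
package main

import "fmt"

// threshold mirrors one entry of the HardEvictionThresholds array logged by
// container_manager_linux.go; the type itself is hypothetical.
type threshold struct {
	Signal string
	Value  string // percentage or absolute quantity, exactly as logged
}

func main() {
	hard := []threshold{
		{"imagefs.available", "15%"},  // Percentage:0.15
		{"memory.available", "100Mi"}, // Quantity:100Mi
		{"nodefs.available", "10%"},   // Percentage:0.1
		{"nodefs.inodesFree", "5%"},   // Percentage:0.05
	}
	for _, t := range hard {
		fmt.Printf("evict pods when %s < %s\n", t.Signal, t.Value)
	}
}
```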
Dec 13 14:29:31.778728 kubelet[2277]: I1213 14:29:31.778709 2277 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:29:31.782347 kubelet[2277]: I1213 14:29:31.782327 2277 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:29:31.784458 kubelet[2277]: I1213 14:29:31.784438 2277 server.go:1256] "Started kubelet"
Dec 13 14:29:31.793571 kubelet[2277]: I1213 14:29:31.793549 2277 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:29:31.802252 kubelet[2277]: I1213 14:29:31.802231 2277 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:29:31.808892 kubelet[2277]: I1213 14:29:31.808848 2277 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:29:31.811225 kubelet[2277]: I1213 14:29:31.802585 2277 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:29:31.811606 kubelet[2277]: I1213 14:29:31.811589 2277 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:29:31.811729 kubelet[2277]: I1213 14:29:31.804257 2277 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:29:31.811942 kubelet[2277]: I1213 14:29:31.804289 2277 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:29:31.812432 kubelet[2277]: I1213 14:29:31.812230 2277 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:29:31.889706 kubelet[2277]: I1213 14:29:31.887797 2277 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:29:31.890125 kubelet[2277]: I1213 14:29:31.890093 2277 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:29:31.918196 kubelet[2277]: I1213 14:29:31.918160 2277 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:29:31.919406 kubelet[2277]: I1213 14:29:31.919369 2277 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:29:31.921282 kubelet[2277]: I1213 14:29:31.921253 2277 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:29:31.921413 kubelet[2277]: I1213 14:29:31.921299 2277 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:29:31.921413 kubelet[2277]: I1213 14:29:31.921331 2277 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:29:31.921413 kubelet[2277]: E1213 14:29:31.921404 2277 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:29:31.936236 kubelet[2277]: E1213 14:29:31.936211 2277 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Dec 13 14:29:31.947619 kubelet[2277]: I1213 14:29:31.947587 2277 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:31.969001 kubelet[2277]: I1213 14:29:31.968972 2277 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:31.969291 kubelet[2277]: I1213 14:29:31.969279 2277 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.026105 kubelet[2277]: E1213 14:29:32.026062 2277 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:29:32.107762 kubelet[2277]: I1213 14:29:32.107727 2277 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:29:32.108060 kubelet[2277]: I1213 14:29:32.108040 2277 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:29:32.108194 kubelet[2277]: I1213 14:29:32.108182 2277 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:29:32.108515 kubelet[2277]: I1213 14:29:32.108499 2277 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:29:32.108694 kubelet[2277]: I1213 14:29:32.108679 2277 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:29:32.108812 kubelet[2277]: I1213 14:29:32.108798 2277 policy_none.go:49] "None policy: Start"
Dec 13 14:29:32.110120 kubelet[2277]: I1213 14:29:32.110088 2277 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:29:32.110307 kubelet[2277]: I1213 14:29:32.110291 2277 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:29:32.110708 kubelet[2277]: I1213 14:29:32.110688 2277 state_mem.go:75] "Updated machine memory state"
Dec 13 14:29:32.113582 kubelet[2277]: I1213 14:29:32.113561 2277 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:29:32.119576 kubelet[2277]: I1213 14:29:32.119545 2277 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:29:32.227480 kubelet[2277]: I1213 14:29:32.227324 2277 topology_manager.go:215] "Topology Admit Handler" podUID="f3e256f03bdb24930e0716a1872182fb" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.227899 kubelet[2277]: I1213 14:29:32.227847 2277 topology_manager.go:215] "Topology Admit Handler" podUID="f8d9e0681bc279e036bc3ce446934778" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.228130 kubelet[2277]: I1213 14:29:32.228083 2277 topology_manager.go:215] "Topology Admit Handler" podUID="9a5d2e1a14c349d22b5d63e96665b331" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.236425 kubelet[2277]: W1213 14:29:32.236392 2277 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:29:32.238925 kubelet[2277]: W1213 14:29:32.238845 2277 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Dec 13 14:29:32.241080 kubelet[2277]: W1213 14:29:32.241054 2277 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
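These three warnings fire once per static pod: the pod names are reused as pod hostnames but are not valid DNS labels, which is also why the kubelet truncated them to 63 characters earlier ("Hostname for pod was too long, truncated it", hostnameMaxLen=63). A stdlib restatement of the two logged rules; the pattern is the conventional DNS-1123 label regexp, an assumption insofar as the kubelet's exact validation code is not shown in this log:

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Label: lowercase alphanumerics and '-', no dots, per RFC 1123.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func labelErrors(name string) []string {
	var errs []string
	if len(name) > 63 {
		errs = append(errs, "must be no more than 63 characters")
	}
	if !dns1123Label.MatchString(name) {
		errs = append(errs, "must not contain dots") // the failure mode seen here
	}
	return errs
}

func main() {
	name := "kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
	// Prints: [must be no more than 63 characters must not contain dots]
	fmt.Println(labelErrors(name))
}
```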
Dec 13 14:29:32.321139 kubelet[2277]: I1213 14:29:32.321083 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a5d2e1a14c349d22b5d63e96665b331-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"9a5d2e1a14c349d22b5d63e96665b331\") " pod="kube-system/kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321401 kubelet[2277]: I1213 14:29:32.321164 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321401 kubelet[2277]: I1213 14:29:32.321211 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321401 kubelet[2277]: I1213 14:29:32.321248 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321401 kubelet[2277]: I1213 14:29:32.321298 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321653 kubelet[2277]: I1213 14:29:32.321333 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8d9e0681bc279e036bc3ce446934778-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f8d9e0681bc279e036bc3ce446934778\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321653 kubelet[2277]: I1213 14:29:32.321393 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3e256f03bdb24930e0716a1872182fb-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f3e256f03bdb24930e0716a1872182fb\") " pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321653 kubelet[2277]: I1213 14:29:32.321433 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3e256f03bdb24930e0716a1872182fb-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f3e256f03bdb24930e0716a1872182fb\") " pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.321653 kubelet[2277]: I1213 14:29:32.321479 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3e256f03bdb24930e0716a1872182fb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" (UID: \"f3e256f03bdb24930e0716a1872182fb\") " pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal"
Dec 13 14:29:32.611754 sudo[2288]: pam_unix(sudo:session): session closed for user root
Dec 13 14:29:32.806586 kubelet[2277]: I1213 14:29:32.806518 2277 apiserver.go:52] "Watching apiserver"
Dec 13 14:29:32.812897 kubelet[2277]: I1213 14:29:32.812816 2277 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:29:32.885690 kubelet[2277]: I1213 14:29:32.885534 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" podStartSLOduration=0.885434129 podStartE2EDuration="885.434129ms" podCreationTimestamp="2024-12-13 14:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:32.884783069 +0000 UTC m=+1.264630859" watchObservedRunningTime="2024-12-13 14:29:32.885434129 +0000 UTC m=+1.265281890"
Dec 13 14:29:32.886236 kubelet[2277]: I1213 14:29:32.886203 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" podStartSLOduration=0.886153495 podStartE2EDuration="886.153495ms" podCreationTimestamp="2024-12-13 14:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:32.870927127 +0000 UTC m=+1.250774883" watchObservedRunningTime="2024-12-13 14:29:32.886153495 +0000 UTC m=+1.266001276"
Dec 13 14:29:32.897141 kubelet[2277]: I1213 14:29:32.897101 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" podStartSLOduration=0.897041465 podStartE2EDuration="897.041465ms" podCreationTimestamp="2024-12-13 14:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:32.895970648 +0000 UTC m=+1.275818456" watchObservedRunningTime="2024-12-13 14:29:32.897041465 +0000 UTC m=+1.276889236"
Dec 13 14:29:35.097225 sudo[1557]: pam_unix(sudo:session): session closed for user root
Dec 13 14:29:35.140689 sshd[1553]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:35.147156 systemd[1]: sshd@6-10.128.0.25:22-139.178.68.195:46412.service: Deactivated successfully.
Dec 13 14:29:35.149753 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:29:35.150454 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:29:35.153389 systemd-logind[1301]: Removed session 7.
Dec 13 14:29:44.925879 kubelet[2277]: I1213 14:29:44.925808 2277 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:29:44.926986 env[1313]: time="2024-12-13T14:29:44.926914016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:29:44.927620 kubelet[2277]: I1213 14:29:44.927261 2277 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:29:45.737388 kubelet[2277]: I1213 14:29:45.737318 2277 topology_manager.go:215] "Topology Admit Handler" podUID="af8a081a-8bcc-45d4-b713-a5a20e8bc967" podNamespace="kube-system" podName="kube-proxy-7z72w"
Dec 13 14:29:45.750946 kubelet[2277]: I1213 14:29:45.750899 2277 topology_manager.go:215] "Topology Admit Handler" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" podNamespace="kube-system" podName="cilium-6wlj6"
Dec 13 14:29:45.788090 kubelet[2277]: W1213 14:29:45.788038 2277 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal' and this object
Dec 13 14:29:45.788682 kubelet[2277]: E1213 14:29:45.788580 2277 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal' and this object
Dec 13 14:29:45.790313 kubelet[2277]: W1213 14:29:45.790277 2277 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal' and this object
Dec 13 14:29:45.790517 kubelet[2277]: E1213 14:29:45.790492 2277 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal' and this object
Dec 13 14:29:45.791930 kubelet[2277]: W1213 14:29:45.791884 2277 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal' and this object
Dec 13 14:29:45.792175 kubelet[2277]: E1213 14:29:45.792152 2277 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal' and this object
Dec 13 14:29:45.817910 kubelet[2277]: I1213 14:29:45.817867 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af8a081a-8bcc-45d4-b713-a5a20e8bc967-lib-modules\") pod \"kube-proxy-7z72w\" (UID: \"af8a081a-8bcc-45d4-b713-a5a20e8bc967\") " pod="kube-system/kube-proxy-7z72w"
Dec 13 14:29:45.818348 kubelet[2277]: I1213 14:29:45.817929 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-lib-modules\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818348 kubelet[2277]: I1213 14:29:45.817972 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-kernel\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818348 kubelet[2277]: I1213 14:29:45.818006 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-run\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818348 kubelet[2277]: I1213 14:29:45.818039 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-bpf-maps\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818348 kubelet[2277]: I1213 14:29:45.818071 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-hostproc\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818348 kubelet[2277]: I1213 14:29:45.818105 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-cgroup\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818972 kubelet[2277]: I1213 14:29:45.818138 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-net\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818972 kubelet[2277]: I1213 14:29:45.818170 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af8a081a-8bcc-45d4-b713-a5a20e8bc967-xtables-lock\") pod \"kube-proxy-7z72w\" (UID: \"af8a081a-8bcc-45d4-b713-a5a20e8bc967\") " pod="kube-system/kube-proxy-7z72w"
Dec 13 14:29:45.818972 kubelet[2277]: I1213 14:29:45.818198 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-xtables-lock\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818972 kubelet[2277]: I1213 14:29:45.818230 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-hubble-tls\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.818972 kubelet[2277]: I1213 14:29:45.818268 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-config-path\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.819453 kubelet[2277]: I1213 14:29:45.818302 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6966h\" (UniqueName: \"kubernetes.io/projected/af8a081a-8bcc-45d4-b713-a5a20e8bc967-kube-api-access-6966h\") pod \"kube-proxy-7z72w\" (UID: \"af8a081a-8bcc-45d4-b713-a5a20e8bc967\") " pod="kube-system/kube-proxy-7z72w"
Dec 13 14:29:45.819453 kubelet[2277]: I1213 14:29:45.818334 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cni-path\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.819453 kubelet[2277]: I1213 14:29:45.818369 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-etc-cni-netd\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
Dec 13 14:29:45.819453 kubelet[2277]: I1213 14:29:45.818414 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswp2\" (UniqueName: \"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-kube-api-access-vswp2\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6"
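The four reflector failures above ("no relationship found between node ... and this object") come from the node authorizer: a kubelet may only read a secret or configmap once a pod referencing it is bound to that node, and cilium-6wlj6's volumes were indexed before that binding was visible. One way to replay the decision is to POST a SubjectAccessReview as a cluster administrator; a stdlib sketch with credentials and TLS handling reduced to placeholders:

```go
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the authorizer the same question the kubelet's reflector asked.
	sar := []byte(`{
	  "apiVersion": "authorization.k8s.io/v1",
	  "kind": "SubjectAccessReview",
	  "spec": {
	    "user": "system:node:ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal",
	    "groups": ["system:nodes"],
	    "resourceAttributes": {
	      "verb": "list", "resource": "secrets",
	      "namespace": "kube-system", "name": "cilium-clustermesh"
	    }
	  }
	}`)

	// INSECURE: demo transport only; use real cluster CA material in practice.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, _ := http.NewRequest("POST",
		"https://10.128.0.25:6443/apis/authorization.k8s.io/v1/subjectaccessreviews",
		bytes.NewReader(sar))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer <admin-token>") // placeholder credential

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expect allowed:false until the pod is bound to the node
}
```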
\"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-kube-api-access-vswp2\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6" Dec 13 14:29:45.819453 kubelet[2277]: I1213 14:29:45.818451 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af8a081a-8bcc-45d4-b713-a5a20e8bc967-kube-proxy\") pod \"kube-proxy-7z72w\" (UID: \"af8a081a-8bcc-45d4-b713-a5a20e8bc967\") " pod="kube-system/kube-proxy-7z72w" Dec 13 14:29:45.819713 kubelet[2277]: I1213 14:29:45.818490 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ef6187e-6167-481b-a3bc-726709dcb8de-clustermesh-secrets\") pod \"cilium-6wlj6\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " pod="kube-system/cilium-6wlj6" Dec 13 14:29:46.040288 kubelet[2277]: I1213 14:29:46.036905 2277 topology_manager.go:215] "Topology Admit Handler" podUID="d5a38997-fa1a-415e-9e17-e9360e842d5a" podNamespace="kube-system" podName="cilium-operator-5cc964979-pp8lh" Dec 13 14:29:46.047799 env[1313]: time="2024-12-13T14:29:46.047141457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7z72w,Uid:af8a081a-8bcc-45d4-b713-a5a20e8bc967,Namespace:kube-system,Attempt:0,}" Dec 13 14:29:46.106118 env[1313]: time="2024-12-13T14:29:46.106001692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:46.106447 env[1313]: time="2024-12-13T14:29:46.106064582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:46.106447 env[1313]: time="2024-12-13T14:29:46.106426467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
Dec 13 14:29:46.040288 kubelet[2277]: I1213 14:29:46.036905 2277 topology_manager.go:215] "Topology Admit Handler" podUID="d5a38997-fa1a-415e-9e17-e9360e842d5a" podNamespace="kube-system" podName="cilium-operator-5cc964979-pp8lh"
Dec 13 14:29:46.047799 env[1313]: time="2024-12-13T14:29:46.047141457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7z72w,Uid:af8a081a-8bcc-45d4-b713-a5a20e8bc967,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:46.106118 env[1313]: time="2024-12-13T14:29:46.106001692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:46.106447 env[1313]: time="2024-12-13T14:29:46.106064582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:46.106447 env[1313]: time="2024-12-13T14:29:46.106426467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:46.106983 env[1313]: time="2024-12-13T14:29:46.106909944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42d306b9f836a029db22815f378844c025f2f88d38270555de42459c356eeb22 pid=2356 runtime=io.containerd.runc.v2
Dec 13 14:29:46.124686 kubelet[2277]: I1213 14:29:46.124644 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5a38997-fa1a-415e-9e17-e9360e842d5a-cilium-config-path\") pod \"cilium-operator-5cc964979-pp8lh\" (UID: \"d5a38997-fa1a-415e-9e17-e9360e842d5a\") " pod="kube-system/cilium-operator-5cc964979-pp8lh"
Dec 13 14:29:46.125009 kubelet[2277]: I1213 14:29:46.124745 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkpt\" (UniqueName: \"kubernetes.io/projected/d5a38997-fa1a-415e-9e17-e9360e842d5a-kube-api-access-zzkpt\") pod \"cilium-operator-5cc964979-pp8lh\" (UID: \"d5a38997-fa1a-415e-9e17-e9360e842d5a\") " pod="kube-system/cilium-operator-5cc964979-pp8lh"
Dec 13 14:29:46.206341 env[1313]: time="2024-12-13T14:29:46.206285457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7z72w,Uid:af8a081a-8bcc-45d4-b713-a5a20e8bc967,Namespace:kube-system,Attempt:0,} returns sandbox id \"42d306b9f836a029db22815f378844c025f2f88d38270555de42459c356eeb22\""
Dec 13 14:29:46.212138 env[1313]: time="2024-12-13T14:29:46.212073542Z" level=info msg="CreateContainer within sandbox \"42d306b9f836a029db22815f378844c025f2f88d38270555de42459c356eeb22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:29:46.240162 env[1313]: time="2024-12-13T14:29:46.240092580Z" level=info msg="CreateContainer within sandbox \"42d306b9f836a029db22815f378844c025f2f88d38270555de42459c356eeb22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac3ddac368dcceda046228da8d45ddc1d590c138f78a56e2afcc1bec157a2ccc\""
Dec 13 14:29:46.241972 env[1313]: time="2024-12-13T14:29:46.240724672Z" level=info msg="StartContainer for \"ac3ddac368dcceda046228da8d45ddc1d590c138f78a56e2afcc1bec157a2ccc\""
Dec 13 14:29:46.337136 env[1313]: time="2024-12-13T14:29:46.337047024Z" level=info msg="StartContainer for \"ac3ddac368dcceda046228da8d45ddc1d590c138f78a56e2afcc1bec157a2ccc\" returns successfully"
Dec 13 14:29:46.920779 kubelet[2277]: E1213 14:29:46.920715 2277 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Dec 13 14:29:46.921111 kubelet[2277]: E1213 14:29:46.920799 2277 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6wlj6: failed to sync secret cache: timed out waiting for the condition
Dec 13 14:29:46.921111 kubelet[2277]: E1213 14:29:46.920997 2277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-hubble-tls podName:8ef6187e-6167-481b-a3bc-726709dcb8de nodeName:}" failed. No retries permitted until 2024-12-13 14:29:47.420914555 +0000 UTC m=+15.800762295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-hubble-tls") pod "cilium-6wlj6" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de") : failed to sync secret cache: timed out waiting for the condition
Dec 13 14:29:46.923022 kubelet[2277]: E1213 14:29:46.921760 2277 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Dec 13 14:29:46.923022 kubelet[2277]: E1213 14:29:46.921887 2277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ef6187e-6167-481b-a3bc-726709dcb8de-clustermesh-secrets podName:8ef6187e-6167-481b-a3bc-726709dcb8de nodeName:}" failed. No retries permitted until 2024-12-13 14:29:47.421843921 +0000 UTC m=+15.801691678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/8ef6187e-6167-481b-a3bc-726709dcb8de-clustermesh-secrets") pod "cilium-6wlj6" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de") : failed to sync secret cache: timed out waiting for the condition
Dec 13 14:29:46.944265 systemd[1]: run-containerd-runc-k8s.io-42d306b9f836a029db22815f378844c025f2f88d38270555de42459c356eeb22-runc.OTP5Xf.mount: Deactivated successfully.
Dec 13 14:29:46.947252 env[1313]: time="2024-12-13T14:29:46.946714065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pp8lh,Uid:d5a38997-fa1a-415e-9e17-e9360e842d5a,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:46.980332 env[1313]: time="2024-12-13T14:29:46.980229710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:46.980332 env[1313]: time="2024-12-13T14:29:46.980289544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:46.980939 env[1313]: time="2024-12-13T14:29:46.980307890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:46.980939 env[1313]: time="2024-12-13T14:29:46.980516980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af pid=2553 runtime=io.containerd.runc.v2
Dec 13 14:29:47.091997 env[1313]: time="2024-12-13T14:29:47.091931229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pp8lh,Uid:d5a38997-fa1a-415e-9e17-e9360e842d5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af\""
Dec 13 14:29:47.097849 env[1313]: time="2024-12-13T14:29:47.097775532Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:29:47.557472 env[1313]: time="2024-12-13T14:29:47.557387384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wlj6,Uid:8ef6187e-6167-481b-a3bc-726709dcb8de,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:47.580928 env[1313]: time="2024-12-13T14:29:47.580615876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:47.580928 env[1313]: time="2024-12-13T14:29:47.580680617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:47.580928 env[1313]: time="2024-12-13T14:29:47.580701687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:47.581417 env[1313]: time="2024-12-13T14:29:47.581001884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da pid=2596 runtime=io.containerd.runc.v2
Dec 13 14:29:47.644558 env[1313]: time="2024-12-13T14:29:47.644480766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wlj6,Uid:8ef6187e-6167-481b-a3bc-726709dcb8de,Namespace:kube-system,Attempt:0,} returns sandbox id \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\""
Dec 13 14:29:48.594496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386884970.mount: Deactivated successfully.
Dec 13 14:29:49.511058 env[1313]: time="2024-12-13T14:29:49.510976184Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:49.519812 env[1313]: time="2024-12-13T14:29:49.519746918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:49.522161 env[1313]: time="2024-12-13T14:29:49.522112342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:49.523064 env[1313]: time="2024-12-13T14:29:49.523010557Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:29:49.525630 env[1313]: time="2024-12-13T14:29:49.525583997Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:29:49.527820 env[1313]: time="2024-12-13T14:29:49.527770613Z" level=info msg="CreateContainer within sandbox \"93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:29:49.551622 env[1313]: time="2024-12-13T14:29:49.551556366Z" level=info msg="CreateContainer within sandbox \"93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\""
Dec 13 14:29:49.554342 env[1313]: time="2024-12-13T14:29:49.552626341Z" level=info msg="StartContainer for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\""
Dec 13 14:29:49.647888 env[1313]: time="2024-12-13T14:29:49.646476805Z" level=info msg="StartContainer for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" returns successfully"
Dec 13 14:29:50.104987 kubelet[2277]: I1213 14:29:50.104935 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pp8lh" podStartSLOduration=1.677527746 podStartE2EDuration="4.104842684s" podCreationTimestamp="2024-12-13 14:29:46 +0000 UTC" firstStartedPulling="2024-12-13 14:29:47.096331902 +0000 UTC m=+15.476179653" lastFinishedPulling="2024-12-13 14:29:49.523646857 +0000 UTC m=+17.903494591" observedRunningTime="2024-12-13 14:29:50.101300362 +0000 UTC m=+18.481148122" watchObservedRunningTime="2024-12-13 14:29:50.104842684 +0000 UTC m=+18.484690444"
Dec 13 14:29:50.106287 kubelet[2277]: I1213 14:29:50.106252 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7z72w" podStartSLOduration=5.106192837 podStartE2EDuration="5.106192837s" podCreationTimestamp="2024-12-13 14:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:47.049997879 +0000 UTC m=+15.429845639" watchObservedRunningTime="2024-12-13 14:29:50.106192837 +0000 UTC m=+18.486040599"
Dec 13 14:29:55.957582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343619891.mount: Deactivated successfully.
Dec 13 14:29:59.465838 env[1313]: time="2024-12-13T14:29:59.465768473Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:59.468639 env[1313]: time="2024-12-13T14:29:59.468592368Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:59.470831 env[1313]: time="2024-12-13T14:29:59.470788466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:59.471830 env[1313]: time="2024-12-13T14:29:59.471777290Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:29:59.477987 env[1313]: time="2024-12-13T14:29:59.477934212Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:29:59.498431 env[1313]: time="2024-12-13T14:29:59.497019022Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\""
Dec 13 14:29:59.500054 env[1313]: time="2024-12-13T14:29:59.499987652Z" level=info msg="StartContainer for \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\""
Dec 13 14:29:59.543538 systemd[1]: run-containerd-runc-k8s.io-9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270-runc.WlPbpK.mount: Deactivated successfully.
Dec 13 14:29:59.605523 env[1313]: time="2024-12-13T14:29:59.603304412Z" level=info msg="StartContainer for \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\" returns successfully"
Dec 13 14:30:00.491484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270-rootfs.mount: Deactivated successfully.
Dec 13 14:30:01.709137 env[1313]: time="2024-12-13T14:30:01.709031760Z" level=info msg="shim disconnected" id=9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270
Dec 13 14:30:01.709137 env[1313]: time="2024-12-13T14:30:01.709123505Z" level=warning msg="cleaning up after shim disconnected" id=9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270 namespace=k8s.io
Dec 13 14:30:01.709137 env[1313]: time="2024-12-13T14:30:01.709141178Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:01.722766 env[1313]: time="2024-12-13T14:30:01.722686324Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2720 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:02.095308 env[1313]: time="2024-12-13T14:30:02.093677529Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:30:02.119177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144353463.mount: Deactivated successfully.
Dec 13 14:30:02.134032 env[1313]: time="2024-12-13T14:30:02.133949104Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\""
Dec 13 14:30:02.135218 env[1313]: time="2024-12-13T14:30:02.135176912Z" level=info msg="StartContainer for \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\""
Dec 13 14:30:02.250137 env[1313]: time="2024-12-13T14:30:02.250063392Z" level=info msg="StartContainer for \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\" returns successfully"
Dec 13 14:30:02.270447 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:30:02.270799 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:30:02.271478 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:30:02.281232 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:30:02.301601 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:30:02.316088 env[1313]: time="2024-12-13T14:30:02.316003181Z" level=info msg="shim disconnected" id=f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487
Dec 13 14:30:02.316088 env[1313]: time="2024-12-13T14:30:02.316076920Z" level=warning msg="cleaning up after shim disconnected" id=f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487 namespace=k8s.io
Dec 13 14:30:02.316463 env[1313]: time="2024-12-13T14:30:02.316099420Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:02.330586 env[1313]: time="2024-12-13T14:30:02.330517263Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2788 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:03.097463 env[1313]: time="2024-12-13T14:30:03.097296419Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:30:03.113446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487-rootfs.mount: Deactivated successfully.
Dec 13 14:30:03.148645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232163960.mount: Deactivated successfully.
Dec 13 14:30:03.161516 env[1313]: time="2024-12-13T14:30:03.161023560Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\""
Dec 13 14:30:03.166179 env[1313]: time="2024-12-13T14:30:03.166131758Z" level=info msg="StartContainer for \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\""
Dec 13 14:30:03.328973 env[1313]: time="2024-12-13T14:30:03.325807502Z" level=info msg="StartContainer for \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\" returns successfully"
Dec 13 14:30:03.365021 env[1313]: time="2024-12-13T14:30:03.364806411Z" level=info msg="shim disconnected" id=f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a
Dec 13 14:30:03.365021 env[1313]: time="2024-12-13T14:30:03.364913818Z" level=warning msg="cleaning up after shim disconnected" id=f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a namespace=k8s.io
Dec 13 14:30:03.365021 env[1313]: time="2024-12-13T14:30:03.364932654Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:03.379324 env[1313]: time="2024-12-13T14:30:03.379263879Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2845 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:04.110921 env[1313]: time="2024-12-13T14:30:04.104363068Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:30:04.118490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a-rootfs.mount: Deactivated successfully.
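The mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state containers that start and exit in sequence here are the Cilium agent pod's init containers; each "shim disconnected" / "cleaning up dead shim" pair is containerd's runc v2 shim going away as a short-lived container exits normally. To inspect that lifecycle on a host like this one, a sketch against the containerd Go client could look like the following (the socket path is an assumption of the stock default; the k8s.io namespace matches the signal-loop paths above):

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// CRI-managed containers live in the "k8s.io" containerd namespace.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range containers {
    		task, err := c.Task(ctx, nil)
    		if err != nil {
    			// No running task: the shim already exited, e.g. a
    			// finished init container.
    			fmt.Printf("%s: no task\n", c.ID())
    			continue
    		}
    		status, err := task.Status(ctx)
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s: %s\n", c.ID(), status.Status)
    	}
    }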
Dec 13 14:30:04.153527 env[1313]: time="2024-12-13T14:30:04.153453929Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\""
Dec 13 14:30:04.155949 env[1313]: time="2024-12-13T14:30:04.154475407Z" level=info msg="StartContainer for \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\""
Dec 13 14:30:04.241145 env[1313]: time="2024-12-13T14:30:04.241071943Z" level=info msg="StartContainer for \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\" returns successfully"
Dec 13 14:30:04.276454 env[1313]: time="2024-12-13T14:30:04.276379509Z" level=info msg="shim disconnected" id=85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17
Dec 13 14:30:04.276939 env[1313]: time="2024-12-13T14:30:04.276904327Z" level=warning msg="cleaning up after shim disconnected" id=85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17 namespace=k8s.io
Dec 13 14:30:04.277122 env[1313]: time="2024-12-13T14:30:04.277098459Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:04.290510 env[1313]: time="2024-12-13T14:30:04.290442200Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2901 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:05.111971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17-rootfs.mount: Deactivated successfully.
Dec 13 14:30:05.118794 env[1313]: time="2024-12-13T14:30:05.118733449Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:30:05.147982 env[1313]: time="2024-12-13T14:30:05.147912033Z" level=info msg="CreateContainer within sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\""
Dec 13 14:30:05.160170 env[1313]: time="2024-12-13T14:30:05.160108192Z" level=info msg="StartContainer for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\""
Dec 13 14:30:05.257196 env[1313]: time="2024-12-13T14:30:05.257121933Z" level=info msg="StartContainer for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" returns successfully"
Dec 13 14:30:05.455309 kubelet[2277]: I1213 14:30:05.453749 2277 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:30:05.494390 kubelet[2277]: I1213 14:30:05.494335 2277 topology_manager.go:215] "Topology Admit Handler" podUID="962aa9f7-8bdd-46c7-b78e-15c5be545509" podNamespace="kube-system" podName="coredns-76f75df574-bl9hz"
Dec 13 14:30:05.498064 kubelet[2277]: I1213 14:30:05.498029 2277 topology_manager.go:215] "Topology Admit Handler" podUID="44b3be5b-3370-427e-bff1-a5062526dd61" podNamespace="kube-system" podName="coredns-76f75df574-vsxq5"
Dec 13 14:30:05.592629 kubelet[2277]: I1213 14:30:05.592565 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44b3be5b-3370-427e-bff1-a5062526dd61-config-volume\") pod \"coredns-76f75df574-vsxq5\" (UID: \"44b3be5b-3370-427e-bff1-a5062526dd61\") " pod="kube-system/coredns-76f75df574-vsxq5"
Dec 13 14:30:05.593038 kubelet[2277]: I1213 14:30:05.593014 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/962aa9f7-8bdd-46c7-b78e-15c5be545509-config-volume\") pod \"coredns-76f75df574-bl9hz\" (UID: \"962aa9f7-8bdd-46c7-b78e-15c5be545509\") " pod="kube-system/coredns-76f75df574-bl9hz"
Dec 13 14:30:05.593235 kubelet[2277]: I1213 14:30:05.593219 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68sf\" (UniqueName: \"kubernetes.io/projected/44b3be5b-3370-427e-bff1-a5062526dd61-kube-api-access-p68sf\") pod \"coredns-76f75df574-vsxq5\" (UID: \"44b3be5b-3370-427e-bff1-a5062526dd61\") " pod="kube-system/coredns-76f75df574-vsxq5"
Dec 13 14:30:05.593446 kubelet[2277]: I1213 14:30:05.593406 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vv7l\" (UniqueName: \"kubernetes.io/projected/962aa9f7-8bdd-46c7-b78e-15c5be545509-kube-api-access-9vv7l\") pod \"coredns-76f75df574-bl9hz\" (UID: \"962aa9f7-8bdd-46c7-b78e-15c5be545509\") " pod="kube-system/coredns-76f75df574-bl9hz"
Dec 13 14:30:05.821551 env[1313]: time="2024-12-13T14:30:05.821474959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bl9hz,Uid:962aa9f7-8bdd-46c7-b78e-15c5be545509,Namespace:kube-system,Attempt:0,}"
Dec 13 14:30:05.823125 env[1313]: time="2024-12-13T14:30:05.823073912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vsxq5,Uid:44b3be5b-3370-427e-bff1-a5062526dd61,Namespace:kube-system,Attempt:0,}"
Dec 13 14:30:06.123548 systemd[1]: run-containerd-runc-k8s.io-8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816-runc.kNO8dP.mount: Deactivated successfully.
Dec 13 14:30:06.149992 kubelet[2277]: I1213 14:30:06.149068 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6wlj6" podStartSLOduration=9.323080077 podStartE2EDuration="21.149000937s" podCreationTimestamp="2024-12-13 14:29:45 +0000 UTC" firstStartedPulling="2024-12-13 14:29:47.646338451 +0000 UTC m=+16.026186185" lastFinishedPulling="2024-12-13 14:29:59.472259305 +0000 UTC m=+27.852107045" observedRunningTime="2024-12-13 14:30:06.148553646 +0000 UTC m=+34.528401407" watchObservedRunningTime="2024-12-13 14:30:06.149000937 +0000 UTC m=+34.528848701"
Dec 13 14:30:07.579994 systemd-networkd[1073]: cilium_host: Link UP
Dec 13 14:30:07.580279 systemd-networkd[1073]: cilium_net: Link UP
Dec 13 14:30:07.580286 systemd-networkd[1073]: cilium_net: Gained carrier
Dec 13 14:30:07.580580 systemd-networkd[1073]: cilium_host: Gained carrier
Dec 13 14:30:07.591294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:30:07.595268 systemd-networkd[1073]: cilium_host: Gained IPv6LL
Dec 13 14:30:07.742504 systemd-networkd[1073]: cilium_vxlan: Link UP
Dec 13 14:30:07.742515 systemd-networkd[1073]: cilium_vxlan: Gained carrier
Dec 13 14:30:07.977564 systemd-networkd[1073]: cilium_net: Gained IPv6LL
Dec 13 14:30:08.039893 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:30:08.944970 systemd-networkd[1073]: lxc_health: Link UP
Dec 13 14:30:08.965986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:30:08.969065 systemd-networkd[1073]: lxc_health: Gained carrier
Dec 13 14:30:09.414680 systemd-networkd[1073]: lxcffbce629b632: Link UP
Dec 13 14:30:09.421766 systemd-networkd[1073]: lxca31d01c98db2: Link UP
Dec 13 14:30:09.434919 kernel: eth0: renamed from tmp59956
Dec 13 14:30:09.450889 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcffbce629b632: link becomes ready
Dec 13 14:30:09.464898 kernel: eth0: renamed from tmpd7dae
Dec 13 14:30:09.456382 systemd-networkd[1073]: lxcffbce629b632: Gained carrier
Dec 13 14:30:09.485012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca31d01c98db2: link becomes ready
Dec 13 14:30:09.493203 systemd-networkd[1073]: lxca31d01c98db2: Gained carrier
Dec 13 14:30:09.729131 systemd-networkd[1073]: cilium_vxlan: Gained IPv6LL
Dec 13 14:30:10.177050 systemd-networkd[1073]: lxc_health: Gained IPv6LL
Dec 13 14:30:10.817142 systemd-networkd[1073]: lxcffbce629b632: Gained IPv6LL
Dec 13 14:30:11.009077 systemd-networkd[1073]: lxca31d01c98db2: Gained IPv6LL
Dec 13 14:30:13.168024 kubelet[2277]: I1213 14:30:13.167954 2277 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 14:30:14.689702 env[1313]: time="2024-12-13T14:30:14.689601903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:30:14.690653 env[1313]: time="2024-12-13T14:30:14.690602088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:30:14.690838 env[1313]: time="2024-12-13T14:30:14.690801928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:30:14.691336 env[1313]: time="2024-12-13T14:30:14.691280959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59956b7883b667a9fedc5e273a03a0064c657b6b6f8058323cbcae7336f9b1c4 pid=3436 runtime=io.containerd.runc.v2
Dec 13 14:30:14.840372 env[1313]: time="2024-12-13T14:30:14.840255461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:30:14.840647 env[1313]: time="2024-12-13T14:30:14.840398017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:30:14.840647 env[1313]: time="2024-12-13T14:30:14.840463163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:30:14.840806 env[1313]: time="2024-12-13T14:30:14.840751901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7daec2674533e29daf9dd95a5f80ced978c487273461172766b9c249d25ccc3 pid=3474 runtime=io.containerd.runc.v2
Dec 13 14:30:14.863910 env[1313]: time="2024-12-13T14:30:14.863805178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bl9hz,Uid:962aa9f7-8bdd-46c7-b78e-15c5be545509,Namespace:kube-system,Attempt:0,} returns sandbox id \"59956b7883b667a9fedc5e273a03a0064c657b6b6f8058323cbcae7336f9b1c4\""
Dec 13 14:30:14.875418 env[1313]: time="2024-12-13T14:30:14.875356180Z" level=info msg="CreateContainer within sandbox \"59956b7883b667a9fedc5e273a03a0064c657b6b6f8058323cbcae7336f9b1c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:30:14.908180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118933676.mount: Deactivated successfully.
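The RunPodSandbox / CreateContainer / StartContainer lines throughout this section are the kubelet driving containerd over the CRI gRPC API. As an illustration only (not from this log; the endpoint is an assumption of the stock containerd socket), a read-only CRI sketch that lists the sandboxes whose long hex ids appear in the "returns sandbox id" lines:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed CRI endpoint; the kubelet on this host talks to containerd.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := pb.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListPodSandbox(context.Background(), &pb.ListPodSandboxRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, s := range resp.Items {
    		// Truncated ids match the prefixes of the sandbox ids logged above.
    		fmt.Printf("%s %s/%s %s\n", s.Id[:12], s.Metadata.Namespace, s.Metadata.Name, s.State)
    	}
    }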
Dec 13 14:30:14.937217 env[1313]: time="2024-12-13T14:30:14.937142332Z" level=info msg="CreateContainer within sandbox \"59956b7883b667a9fedc5e273a03a0064c657b6b6f8058323cbcae7336f9b1c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6db28c7fce4d27e3bd117c99289828a3b0717248a5815dfac990666a8778c1f1\""
Dec 13 14:30:14.943367 env[1313]: time="2024-12-13T14:30:14.942141773Z" level=info msg="StartContainer for \"6db28c7fce4d27e3bd117c99289828a3b0717248a5815dfac990666a8778c1f1\""
Dec 13 14:30:15.079889 env[1313]: time="2024-12-13T14:30:15.078039825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vsxq5,Uid:44b3be5b-3370-427e-bff1-a5062526dd61,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7daec2674533e29daf9dd95a5f80ced978c487273461172766b9c249d25ccc3\""
Dec 13 14:30:15.086502 env[1313]: time="2024-12-13T14:30:15.086129561Z" level=info msg="CreateContainer within sandbox \"d7daec2674533e29daf9dd95a5f80ced978c487273461172766b9c249d25ccc3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:30:15.115723 env[1313]: time="2024-12-13T14:30:15.115643459Z" level=info msg="CreateContainer within sandbox \"d7daec2674533e29daf9dd95a5f80ced978c487273461172766b9c249d25ccc3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60a33f6bd16004565d24bbd0fee8ed5ee315047e22bcba27d450a7153a21cae1\""
Dec 13 14:30:15.119402 env[1313]: time="2024-12-13T14:30:15.119358076Z" level=info msg="StartContainer for \"60a33f6bd16004565d24bbd0fee8ed5ee315047e22bcba27d450a7153a21cae1\""
Dec 13 14:30:15.149002 env[1313]: time="2024-12-13T14:30:15.148665306Z" level=info msg="StartContainer for \"6db28c7fce4d27e3bd117c99289828a3b0717248a5815dfac990666a8778c1f1\" returns successfully"
Dec 13 14:30:15.199428 kubelet[2277]: I1213 14:30:15.198207 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bl9hz" podStartSLOduration=29.198098807 podStartE2EDuration="29.198098807s" podCreationTimestamp="2024-12-13 14:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:15.197496565 +0000 UTC m=+43.577344318" watchObservedRunningTime="2024-12-13 14:30:15.198098807 +0000 UTC m=+43.577946569"
Dec 13 14:30:15.306905 env[1313]: time="2024-12-13T14:30:15.305686905Z" level=info msg="StartContainer for \"60a33f6bd16004565d24bbd0fee8ed5ee315047e22bcba27d450a7153a21cae1\" returns successfully"
Dec 13 14:30:16.195751 kubelet[2277]: I1213 14:30:16.195700 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vsxq5" podStartSLOduration=30.195595977 podStartE2EDuration="30.195595977s" podCreationTimestamp="2024-12-13 14:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:16.194798583 +0000 UTC m=+44.574646335" watchObservedRunningTime="2024-12-13 14:30:16.195595977 +0000 UTC m=+44.575443738"
Dec 13 14:30:35.312665 systemd[1]: Started sshd@7-10.128.0.25:22-139.178.68.195:53644.service.
Dec 13 14:30:35.597737 sshd[3603]: Accepted publickey for core from 139.178.68.195 port 53644 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:35.600818 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:35.609790 systemd[1]: Started session-8.scope.
Dec 13 14:30:35.611155 systemd-logind[1301]: New session 8 of user core.
Dec 13 14:30:35.914725 sshd[3603]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:35.920392 systemd[1]: sshd@7-10.128.0.25:22-139.178.68.195:53644.service: Deactivated successfully.
Dec 13 14:30:35.926316 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:30:35.927392 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:30:35.929231 systemd-logind[1301]: Removed session 8.
Dec 13 14:30:40.961101 systemd[1]: Started sshd@8-10.128.0.25:22-139.178.68.195:57418.service.
Dec 13 14:30:41.249182 sshd[3622]: Accepted publickey for core from 139.178.68.195 port 57418 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:41.251524 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:41.259618 systemd[1]: Started session-9.scope.
Dec 13 14:30:41.261146 systemd-logind[1301]: New session 9 of user core.
Dec 13 14:30:41.544716 sshd[3622]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:41.550657 systemd[1]: sshd@8-10.128.0.25:22-139.178.68.195:57418.service: Deactivated successfully.
Dec 13 14:30:41.552010 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:30:41.552699 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:30:41.554502 systemd-logind[1301]: Removed session 9.
Dec 13 14:30:46.592679 systemd[1]: Started sshd@9-10.128.0.25:22-139.178.68.195:32928.service.
Dec 13 14:30:46.887106 sshd[3637]: Accepted publickey for core from 139.178.68.195 port 32928 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:46.889604 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:46.897570 systemd[1]: Started session-10.scope.
Dec 13 14:30:46.899151 systemd-logind[1301]: New session 10 of user core.
Dec 13 14:30:47.191761 sshd[3637]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:47.197553 systemd[1]: sshd@9-10.128.0.25:22-139.178.68.195:32928.service: Deactivated successfully.
Dec 13 14:30:47.200680 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:30:47.201238 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:30:47.204501 systemd-logind[1301]: Removed session 10.
Dec 13 14:30:52.236146 systemd[1]: Started sshd@10-10.128.0.25:22-139.178.68.195:32930.service.
Dec 13 14:30:52.524477 sshd[3650]: Accepted publickey for core from 139.178.68.195 port 32930 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:52.526722 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:52.535428 systemd[1]: Started session-11.scope.
Dec 13 14:30:52.536682 systemd-logind[1301]: New session 11 of user core.
Dec 13 14:30:52.819292 sshd[3650]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:52.824636 systemd[1]: sshd@10-10.128.0.25:22-139.178.68.195:32930.service: Deactivated successfully.
Dec 13 14:30:52.827169 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:30:52.827396 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:30:52.829967 systemd-logind[1301]: Removed session 11.
Dec 13 14:30:57.864914 systemd[1]: Started sshd@11-10.128.0.25:22-139.178.68.195:42930.service.
Dec 13 14:30:58.149121 sshd[3663]: Accepted publickey for core from 139.178.68.195 port 42930 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:58.151121 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:58.159055 systemd[1]: Started session-12.scope.
Dec 13 14:30:58.160281 systemd-logind[1301]: New session 12 of user core.
Dec 13 14:30:58.446790 sshd[3663]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:58.454221 systemd[1]: sshd@11-10.128.0.25:22-139.178.68.195:42930.service: Deactivated successfully.
Dec 13 14:30:58.454678 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:30:58.457186 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:30:58.459984 systemd-logind[1301]: Removed session 12.
Dec 13 14:30:58.491601 systemd[1]: Started sshd@12-10.128.0.25:22-139.178.68.195:42934.service.
Dec 13 14:30:58.778770 sshd[3676]: Accepted publickey for core from 139.178.68.195 port 42934 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:58.781394 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:58.789294 systemd[1]: Started session-13.scope.
Dec 13 14:30:58.790279 systemd-logind[1301]: New session 13 of user core.
Dec 13 14:30:59.124009 sshd[3676]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:59.130443 systemd[1]: sshd@12-10.128.0.25:22-139.178.68.195:42934.service: Deactivated successfully.
Dec 13 14:30:59.132554 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:30:59.133272 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:30:59.135346 systemd-logind[1301]: Removed session 13.
Dec 13 14:30:59.172449 systemd[1]: Started sshd@13-10.128.0.25:22-139.178.68.195:42940.service.
Dec 13 14:30:59.462797 sshd[3687]: Accepted publickey for core from 139.178.68.195 port 42940 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:30:59.465617 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:59.473648 systemd[1]: Started session-14.scope.
Dec 13 14:30:59.474964 systemd-logind[1301]: New session 14 of user core.
Dec 13 14:30:59.777692 sshd[3687]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:59.783900 systemd[1]: sshd@13-10.128.0.25:22-139.178.68.195:42940.service: Deactivated successfully.
Dec 13 14:30:59.786909 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:30:59.788011 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:30:59.790580 systemd-logind[1301]: Removed session 14.
Dec 13 14:31:04.823738 systemd[1]: Started sshd@14-10.128.0.25:22-139.178.68.195:42946.service.
Dec 13 14:31:05.113646 sshd[3700]: Accepted publickey for core from 139.178.68.195 port 42946 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:05.116145 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:05.124444 systemd[1]: Started session-15.scope.
Dec 13 14:31:05.125370 systemd-logind[1301]: New session 15 of user core.
Dec 13 14:31:05.421890 sshd[3700]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:05.428019 systemd[1]: sshd@14-10.128.0.25:22-139.178.68.195:42946.service: Deactivated successfully.
Dec 13 14:31:05.429382 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:31:05.431211 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:31:05.432984 systemd-logind[1301]: Removed session 15.
Dec 13 14:31:10.467602 systemd[1]: Started sshd@15-10.128.0.25:22-139.178.68.195:37902.service.
Dec 13 14:31:10.751682 sshd[3716]: Accepted publickey for core from 139.178.68.195 port 37902 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:10.754255 sshd[3716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:10.762819 systemd[1]: Started session-16.scope.
Dec 13 14:31:10.763870 systemd-logind[1301]: New session 16 of user core.
Dec 13 14:31:11.049198 sshd[3716]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:11.057702 systemd[1]: sshd@15-10.128.0.25:22-139.178.68.195:37902.service: Deactivated successfully.
Dec 13 14:31:11.060034 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:31:11.060781 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:31:11.062240 systemd-logind[1301]: Removed session 16.
Dec 13 14:31:16.094003 systemd[1]: Started sshd@16-10.128.0.25:22-139.178.68.195:51942.service.
Dec 13 14:31:16.382709 sshd[3729]: Accepted publickey for core from 139.178.68.195 port 51942 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:16.385159 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:16.393385 systemd[1]: Started session-17.scope.
Dec 13 14:31:16.394609 systemd-logind[1301]: New session 17 of user core.
Dec 13 14:31:16.677306 sshd[3729]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:16.682801 systemd[1]: sshd@16-10.128.0.25:22-139.178.68.195:51942.service: Deactivated successfully.
Dec 13 14:31:16.684467 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:31:16.684601 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:31:16.688486 systemd-logind[1301]: Removed session 17.
Dec 13 14:31:16.722817 systemd[1]: Started sshd@17-10.128.0.25:22-139.178.68.195:51944.service.
Dec 13 14:31:17.012125 sshd[3744]: Accepted publickey for core from 139.178.68.195 port 51944 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:17.014226 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:17.022280 systemd[1]: Started session-18.scope.
Dec 13 14:31:17.023272 systemd-logind[1301]: New session 18 of user core.
Dec 13 14:31:17.398341 sshd[3744]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:17.403966 systemd[1]: sshd@17-10.128.0.25:22-139.178.68.195:51944.service: Deactivated successfully.
Dec 13 14:31:17.406562 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:31:17.406741 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:31:17.409217 systemd-logind[1301]: Removed session 18.
Dec 13 14:31:17.443492 systemd[1]: Started sshd@18-10.128.0.25:22-139.178.68.195:51958.service.
Dec 13 14:31:17.729941 sshd[3755]: Accepted publickey for core from 139.178.68.195 port 51958 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:17.732619 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:17.741045 systemd[1]: Started session-19.scope.
Dec 13 14:31:17.742042 systemd-logind[1301]: New session 19 of user core.
Dec 13 14:31:19.690187 sshd[3755]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:19.696299 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:31:19.696592 systemd[1]: sshd@18-10.128.0.25:22-139.178.68.195:51958.service: Deactivated successfully.
Dec 13 14:31:19.698215 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:31:19.699063 systemd-logind[1301]: Removed session 19.
Dec 13 14:31:19.741024 systemd[1]: Started sshd@19-10.128.0.25:22-139.178.68.195:51966.service.
Dec 13 14:31:20.020335 sshd[3773]: Accepted publickey for core from 139.178.68.195 port 51966 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:20.022160 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:20.030559 systemd[1]: Started session-20.scope.
Dec 13 14:31:20.031192 systemd-logind[1301]: New session 20 of user core.
Dec 13 14:31:20.453159 sshd[3773]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:20.459230 systemd[1]: sshd@19-10.128.0.25:22-139.178.68.195:51966.service: Deactivated successfully.
Dec 13 14:31:20.460787 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:31:20.461023 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:31:20.463230 systemd-logind[1301]: Removed session 20.
Dec 13 14:31:20.501962 systemd[1]: Started sshd@20-10.128.0.25:22-139.178.68.195:51976.service.
Dec 13 14:31:20.799384 sshd[3784]: Accepted publickey for core from 139.178.68.195 port 51976 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:20.802122 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:20.810498 systemd[1]: Started session-21.scope.
Dec 13 14:31:20.811449 systemd-logind[1301]: New session 21 of user core.
Dec 13 14:31:21.102644 sshd[3784]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:21.108100 systemd[1]: sshd@20-10.128.0.25:22-139.178.68.195:51976.service: Deactivated successfully.
Dec 13 14:31:21.110983 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:31:21.111754 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:31:21.114286 systemd-logind[1301]: Removed session 21.
Dec 13 14:31:26.149255 systemd[1]: Started sshd@21-10.128.0.25:22-139.178.68.195:33608.service.
Dec 13 14:31:26.443302 sshd[3800]: Accepted publickey for core from 139.178.68.195 port 33608 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:26.445216 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:26.453457 systemd[1]: Started session-22.scope.
Dec 13 14:31:26.454760 systemd-logind[1301]: New session 22 of user core.
Dec 13 14:31:26.748998 sshd[3800]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:26.754451 systemd[1]: sshd@21-10.128.0.25:22-139.178.68.195:33608.service: Deactivated successfully.
Dec 13 14:31:26.757092 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:31:26.758041 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:31:26.760135 systemd-logind[1301]: Removed session 22.
Dec 13 14:31:31.794807 systemd[1]: Started sshd@22-10.128.0.25:22-139.178.68.195:33624.service.
Dec 13 14:31:32.086008 sshd[3813]: Accepted publickey for core from 139.178.68.195 port 33624 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:32.088828 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:32.097404 systemd[1]: Started session-23.scope.
Dec 13 14:31:32.101818 systemd-logind[1301]: New session 23 of user core.
Dec 13 14:31:32.381296 sshd[3813]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:32.386648 systemd[1]: sshd@22-10.128.0.25:22-139.178.68.195:33624.service: Deactivated successfully.
Dec 13 14:31:32.388786 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:31:32.388937 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:31:32.391224 systemd-logind[1301]: Removed session 23.
Dec 13 14:31:37.425752 systemd[1]: Started sshd@23-10.128.0.25:22-139.178.68.195:58500.service.
Dec 13 14:31:37.718120 sshd[3828]: Accepted publickey for core from 139.178.68.195 port 58500 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:37.720523 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:37.727896 systemd-logind[1301]: New session 24 of user core.
Dec 13 14:31:37.728711 systemd[1]: Started session-24.scope.
Dec 13 14:31:38.014465 sshd[3828]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:38.020431 systemd[1]: sshd@23-10.128.0.25:22-139.178.68.195:58500.service: Deactivated successfully.
Dec 13 14:31:38.023347 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:31:38.024742 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:31:38.027453 systemd-logind[1301]: Removed session 24.
Dec 13 14:31:38.059209 systemd[1]: Started sshd@24-10.128.0.25:22-139.178.68.195:58516.service.
Dec 13 14:31:38.345799 sshd[3841]: Accepted publickey for core from 139.178.68.195 port 58516 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:31:38.348537 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:31:38.355765 systemd-logind[1301]: New session 25 of user core.
Dec 13 14:31:38.357380 systemd[1]: Started session-25.scope.
Dec 13 14:31:40.361623 systemd[1]: run-containerd-runc-k8s.io-8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816-runc.EaD90X.mount: Deactivated successfully.
Dec 13 14:31:40.372121 env[1313]: time="2024-12-13T14:31:40.371804154Z" level=info msg="StopContainer for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" with timeout 30 (s)"
Dec 13 14:31:40.373148 env[1313]: time="2024-12-13T14:31:40.372777382Z" level=info msg="Stop container \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" with signal terminated"
Dec 13 14:31:40.402088 env[1313]: time="2024-12-13T14:31:40.401845014Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:31:40.418608 env[1313]: time="2024-12-13T14:31:40.418551795Z" level=info msg="StopContainer for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" with timeout 2 (s)"
Dec 13 14:31:40.421178 env[1313]: time="2024-12-13T14:31:40.421124255Z" level=info msg="Stop container \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" with signal terminated"
Dec 13 14:31:40.437238 systemd-networkd[1073]: lxc_health: Link DOWN
Dec 13 14:31:40.437264 systemd-networkd[1073]: lxc_health: Lost carrier
Dec 13 14:31:40.476751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906-rootfs.mount: Deactivated successfully.
Dec 13 14:31:40.497617 env[1313]: time="2024-12-13T14:31:40.497454479Z" level=info msg="shim disconnected" id=473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906
Dec 13 14:31:40.498202 env[1313]: time="2024-12-13T14:31:40.498114460Z" level=warning msg="cleaning up after shim disconnected" id=473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906 namespace=k8s.io
Dec 13 14:31:40.498831 env[1313]: time="2024-12-13T14:31:40.498746371Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:40.506777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816-rootfs.mount: Deactivated successfully.
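"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the standard graceful-stop contract: send SIGTERM, wait up to the timeout, then escalate to SIGKILL. A self-contained Go sketch of that pattern (the sleep command is just a stand-in for a container's main process, not anything from this log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    func main() {
    	cmd := exec.Command("sleep", "300") // stand-in long-running process
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}

    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	// SIGTERM first, mirroring "Stop container ... with signal terminated".
    	cmd.Process.Signal(syscall.SIGTERM)
    	select {
    	case err := <-done:
    		fmt.Println("exited after SIGTERM:", err)
    	case <-time.After(30 * time.Second):
    		// Escalate after the timeout, as the runtime would with SIGKILL.
    		cmd.Process.Kill()
    		fmt.Println("killed after timeout:", <-done)
    	}
    }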
Dec 13 14:31:40.515797 env[1313]: time="2024-12-13T14:31:40.515726220Z" level=info msg="shim disconnected" id=8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816
Dec 13 14:31:40.516350 env[1313]: time="2024-12-13T14:31:40.516302054Z" level=warning msg="cleaning up after shim disconnected" id=8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816 namespace=k8s.io
Dec 13 14:31:40.516524 env[1313]: time="2024-12-13T14:31:40.516489547Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:40.525574 env[1313]: time="2024-12-13T14:31:40.525528912Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3911 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:40.528553 env[1313]: time="2024-12-13T14:31:40.528497377Z" level=info msg="StopContainer for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" returns successfully"
Dec 13 14:31:40.529633 env[1313]: time="2024-12-13T14:31:40.529590830Z" level=info msg="StopPodSandbox for \"93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af\""
Dec 13 14:31:40.529831 env[1313]: time="2024-12-13T14:31:40.529706581Z" level=info msg="Container to stop \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:31:40.533990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af-shm.mount: Deactivated successfully.
Dec 13 14:31:40.544275 env[1313]: time="2024-12-13T14:31:40.544214478Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3920 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:40.547052 env[1313]: time="2024-12-13T14:31:40.547002877Z" level=info msg="StopContainer for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" returns successfully"
Dec 13 14:31:40.547949 env[1313]: time="2024-12-13T14:31:40.547904741Z" level=info msg="StopPodSandbox for \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\""
Dec 13 14:31:40.548183 env[1313]: time="2024-12-13T14:31:40.548153167Z" level=info msg="Container to stop \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:31:40.548301 env[1313]: time="2024-12-13T14:31:40.548278403Z" level=info msg="Container to stop \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:31:40.548405 env[1313]: time="2024-12-13T14:31:40.548378724Z" level=info msg="Container to stop \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:31:40.548552 env[1313]: time="2024-12-13T14:31:40.548517885Z" level=info msg="Container to stop \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:31:40.548705 env[1313]: time="2024-12-13T14:31:40.548674925Z" level=info msg="Container to stop \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:31:40.598791 env[1313]: time="2024-12-13T14:31:40.598716829Z" level=info msg="shim disconnected" id=93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af
Dec 13 14:31:40.599351 env[1313]: time="2024-12-13T14:31:40.599302721Z" level=warning msg="cleaning up after shim disconnected" id=93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af namespace=k8s.io
Dec 13 14:31:40.599560 env[1313]: time="2024-12-13T14:31:40.599537358Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:40.610198 env[1313]: time="2024-12-13T14:31:40.610125977Z" level=info msg="shim disconnected" id=04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da
Dec 13 14:31:40.610198 env[1313]: time="2024-12-13T14:31:40.610196654Z" level=warning msg="cleaning up after shim disconnected" id=04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da namespace=k8s.io
Dec 13 14:31:40.610512 env[1313]: time="2024-12-13T14:31:40.610212335Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:40.628157 env[1313]: time="2024-12-13T14:31:40.627989806Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:40.629394 env[1313]: time="2024-12-13T14:31:40.629346451Z" level=info msg="TearDown network for sandbox \"93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af\" successfully"
Dec 13 14:31:40.629608 env[1313]: time="2024-12-13T14:31:40.629576088Z" level=info msg="StopPodSandbox for \"93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af\" returns successfully"
Dec 13 14:31:40.636900 env[1313]: time="2024-12-13T14:31:40.636808891Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3984 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:40.646874 env[1313]: time="2024-12-13T14:31:40.640614746Z" level=info msg="TearDown network for sandbox \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" successfully"
Dec 13 14:31:40.646874 env[1313]: time="2024-12-13T14:31:40.640654071Z" level=info msg="StopPodSandbox for \"04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da\" returns successfully"
Dec 13 14:31:40.801161 kubelet[2277]: I1213 14:31:40.801078 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-hubble-tls\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.801161 kubelet[2277]: I1213 14:31:40.801173 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5a38997-fa1a-415e-9e17-e9360e842d5a-cilium-config-path\") pod \"d5a38997-fa1a-415e-9e17-e9360e842d5a\" (UID: \"d5a38997-fa1a-415e-9e17-e9360e842d5a\") "
Dec 13 14:31:40.802256 kubelet[2277]: I1213 14:31:40.801211 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-bpf-maps\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802256 kubelet[2277]: I1213 14:31:40.801301 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-xtables-lock\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802256 kubelet[2277]: I1213 14:31:40.801341 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vswp2\" (UniqueName: \"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-kube-api-access-vswp2\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802256 kubelet[2277]: I1213 14:31:40.801385 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ef6187e-6167-481b-a3bc-726709dcb8de-clustermesh-secrets\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802256 kubelet[2277]: I1213 14:31:40.801423 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-etc-cni-netd\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802256 kubelet[2277]: I1213 14:31:40.801462 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-lib-modules\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802605 kubelet[2277]: I1213 14:31:40.801493 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-hostproc\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802605 kubelet[2277]: I1213 14:31:40.801534 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-config-path\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802605 kubelet[2277]: I1213 14:31:40.801579 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cni-path\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802605 kubelet[2277]: I1213 14:31:40.801630 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-kernel\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802605 kubelet[2277]: I1213 14:31:40.801665 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-run\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802605 kubelet[2277]: I1213 14:31:40.801701 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-cgroup\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") "
Dec 13 14:31:40.802994 kubelet[2277]: I1213
14:31:40.801738 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-net\") pod \"8ef6187e-6167-481b-a3bc-726709dcb8de\" (UID: \"8ef6187e-6167-481b-a3bc-726709dcb8de\") " Dec 13 14:31:40.802994 kubelet[2277]: I1213 14:31:40.801782 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzkpt\" (UniqueName: \"kubernetes.io/projected/d5a38997-fa1a-415e-9e17-e9360e842d5a-kube-api-access-zzkpt\") pod \"d5a38997-fa1a-415e-9e17-e9360e842d5a\" (UID: \"d5a38997-fa1a-415e-9e17-e9360e842d5a\") " Dec 13 14:31:40.802994 kubelet[2277]: I1213 14:31:40.802656 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.808182 kubelet[2277]: I1213 14:31:40.808135 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.808451 kubelet[2277]: I1213 14:31:40.808422 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.808599 kubelet[2277]: I1213 14:31:40.808578 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.808747 kubelet[2277]: I1213 14:31:40.808725 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.808914 kubelet[2277]: I1213 14:31:40.808886 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.809063 kubelet[2277]: I1213 14:31:40.809040 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.810335 kubelet[2277]: I1213 14:31:40.810293 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5a38997-fa1a-415e-9e17-e9360e842d5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5a38997-fa1a-415e-9e17-e9360e842d5a" (UID: "d5a38997-fa1a-415e-9e17-e9360e842d5a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:31:40.810470 kubelet[2277]: I1213 14:31:40.810391 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.810903 kubelet[2277]: I1213 14:31:40.810431 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.811037 kubelet[2277]: I1213 14:31:40.810949 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:40.811280 kubelet[2277]: I1213 14:31:40.811253 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:40.812176 kubelet[2277]: I1213 14:31:40.812140 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:31:40.817341 kubelet[2277]: I1213 14:31:40.817299 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef6187e-6167-481b-a3bc-726709dcb8de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:31:40.817765 kubelet[2277]: I1213 14:31:40.817589 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a38997-fa1a-415e-9e17-e9360e842d5a-kube-api-access-zzkpt" (OuterVolumeSpecName: "kube-api-access-zzkpt") pod "d5a38997-fa1a-415e-9e17-e9360e842d5a" (UID: "d5a38997-fa1a-415e-9e17-e9360e842d5a"). InnerVolumeSpecName "kube-api-access-zzkpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:40.821086 kubelet[2277]: I1213 14:31:40.821030 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-kube-api-access-vswp2" (OuterVolumeSpecName: "kube-api-access-vswp2") pod "8ef6187e-6167-481b-a3bc-726709dcb8de" (UID: "8ef6187e-6167-481b-a3bc-726709dcb8de"). InnerVolumeSpecName "kube-api-access-vswp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902320 2277 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-kernel\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902384 2277 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cni-path\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902406 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-cgroup\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902424 2277 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-host-proc-sys-net\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902447 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-run\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902469 2277 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zzkpt\" (UniqueName: \"kubernetes.io/projected/d5a38997-fa1a-415e-9e17-e9360e842d5a-kube-api-access-zzkpt\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.902944 kubelet[2277]: I1213 14:31:40.902488 2277 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-hubble-tls\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903517 kubelet[2277]: I1213 14:31:40.902507 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5a38997-fa1a-415e-9e17-e9360e842d5a-cilium-config-path\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 
14:31:40.903517 kubelet[2277]: I1213 14:31:40.902526 2277 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vswp2\" (UniqueName: \"kubernetes.io/projected/8ef6187e-6167-481b-a3bc-726709dcb8de-kube-api-access-vswp2\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903517 kubelet[2277]: I1213 14:31:40.902542 2277 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-bpf-maps\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903517 kubelet[2277]: I1213 14:31:40.902642 2277 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-xtables-lock\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903517 kubelet[2277]: I1213 14:31:40.902670 2277 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ef6187e-6167-481b-a3bc-726709dcb8de-clustermesh-secrets\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903517 kubelet[2277]: I1213 14:31:40.902692 2277 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-etc-cni-netd\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903517 kubelet[2277]: I1213 14:31:40.902713 2277 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-hostproc\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903989 kubelet[2277]: I1213 14:31:40.902734 2277 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ef6187e-6167-481b-a3bc-726709dcb8de-lib-modules\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:40.903989 kubelet[2277]: I1213 14:31:40.902755 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ef6187e-6167-481b-a3bc-726709dcb8de-cilium-config-path\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:31:41.349585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da-rootfs.mount: Deactivated successfully. Dec 13 14:31:41.350424 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04640e10e99b5b0cfff2e40e24d0e995161cb65e8f194010260ffa3a4de303da-shm.mount: Deactivated successfully. Dec 13 14:31:41.350942 systemd[1]: var-lib-kubelet-pods-8ef6187e\x2d6167\x2d481b\x2da3bc\x2d726709dcb8de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:31:41.351352 systemd[1]: var-lib-kubelet-pods-8ef6187e\x2d6167\x2d481b\x2da3bc\x2d726709dcb8de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:31:41.351714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93cca69e987f0892e0134dacabab1f4a05cad8fff0c2c5ceaa344277d7c573af-rootfs.mount: Deactivated successfully. 
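The var-lib-kubelet-pods-… mount units deactivated here (and continuing below) are systemd-escaped forms of the kubelet volume paths named in the UnmountVolume entries: an unescaped "-" stands for "/", "\x2d" for a literal "-", and "\x7e" for "~". A small decoding sketch, assuming only the escaping rules visible in these unit names:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit turns a systemd mount unit name back into the path it mounts.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			// "\x2d" -> '-', "\x7e" -> '~', etc.
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // the loop's i++ skips the final hex digit
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/') // unescaped dashes separate path components
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-kubelet-pods-8ef6187e\x2d6167\x2d481b\x2da3bc\x2d726709dcb8de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount`))
	// -> /var/lib/kubelet/pods/8ef6187e-6167-481b-a3bc-726709dcb8de/volumes/kubernetes.io~projected/hubble-tls
}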
Dec 13 14:31:41.352000 systemd[1]: var-lib-kubelet-pods-d5a38997\x2dfa1a\x2d415e\x2d9e17\x2de9360e842d5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzzkpt.mount: Deactivated successfully. Dec 13 14:31:41.352225 systemd[1]: var-lib-kubelet-pods-8ef6187e\x2d6167\x2d481b\x2da3bc\x2d726709dcb8de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvswp2.mount: Deactivated successfully. Dec 13 14:31:41.398120 kubelet[2277]: I1213 14:31:41.398073 2277 scope.go:117] "RemoveContainer" containerID="8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816" Dec 13 14:31:41.411114 env[1313]: time="2024-12-13T14:31:41.410520774Z" level=info msg="RemoveContainer for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\"" Dec 13 14:31:41.420332 env[1313]: time="2024-12-13T14:31:41.420273547Z" level=info msg="RemoveContainer for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" returns successfully" Dec 13 14:31:41.420974 kubelet[2277]: I1213 14:31:41.420915 2277 scope.go:117] "RemoveContainer" containerID="85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17" Dec 13 14:31:41.429512 env[1313]: time="2024-12-13T14:31:41.429440983Z" level=info msg="RemoveContainer for \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\"" Dec 13 14:31:41.438723 env[1313]: time="2024-12-13T14:31:41.438615203Z" level=info msg="RemoveContainer for \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\" returns successfully" Dec 13 14:31:41.439157 kubelet[2277]: I1213 14:31:41.439113 2277 scope.go:117] "RemoveContainer" containerID="f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a" Dec 13 14:31:41.443713 env[1313]: time="2024-12-13T14:31:41.443643925Z" level=info msg="RemoveContainer for \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\"" Dec 13 14:31:41.448148 env[1313]: time="2024-12-13T14:31:41.448095972Z" level=info msg="RemoveContainer for \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\" returns successfully" Dec 13 14:31:41.448440 kubelet[2277]: I1213 14:31:41.448294 2277 scope.go:117] "RemoveContainer" containerID="f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487" Dec 13 14:31:41.449952 env[1313]: time="2024-12-13T14:31:41.449913641Z" level=info msg="RemoveContainer for \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\"" Dec 13 14:31:41.456027 env[1313]: time="2024-12-13T14:31:41.455933551Z" level=info msg="RemoveContainer for \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\" returns successfully" Dec 13 14:31:41.456496 kubelet[2277]: I1213 14:31:41.456383 2277 scope.go:117] "RemoveContainer" containerID="9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270" Dec 13 14:31:41.459840 env[1313]: time="2024-12-13T14:31:41.459796528Z" level=info msg="RemoveContainer for \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\"" Dec 13 14:31:41.464027 env[1313]: time="2024-12-13T14:31:41.463977208Z" level=info msg="RemoveContainer for \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\" returns successfully" Dec 13 14:31:41.464265 kubelet[2277]: I1213 14:31:41.464229 2277 scope.go:117] "RemoveContainer" containerID="8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816" Dec 13 14:31:41.464603 env[1313]: time="2024-12-13T14:31:41.464505635Z" level=error msg="ContainerStatus for \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\": not found" Dec 13 14:31:41.464787 kubelet[2277]: E1213 14:31:41.464761 2277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\": not found" containerID="8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816" Dec 13 14:31:41.465166 kubelet[2277]: I1213 14:31:41.465137 2277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816"} err="failed to get container status \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bfbe55aa8ab56999c16bd6f1a58ebb99a5fae05ec937b8858e613031a2a7816\": not found" Dec 13 14:31:41.465166 kubelet[2277]: I1213 14:31:41.465174 2277 scope.go:117] "RemoveContainer" containerID="85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17" Dec 13 14:31:41.465512 env[1313]: time="2024-12-13T14:31:41.465427591Z" level=error msg="ContainerStatus for \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\": not found" Dec 13 14:31:41.465725 kubelet[2277]: E1213 14:31:41.465642 2277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\": not found" containerID="85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17" Dec 13 14:31:41.465725 kubelet[2277]: I1213 14:31:41.465687 2277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17"} err="failed to get container status \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\": rpc error: code = NotFound desc = an error occurred when try to find container \"85572b66ec2a30d41e27bc92541c46c683ef8067905c471671b7b338a613ae17\": not found" Dec 13 14:31:41.465725 kubelet[2277]: I1213 14:31:41.465704 2277 scope.go:117] "RemoveContainer" containerID="f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a" Dec 13 14:31:41.468130 env[1313]: time="2024-12-13T14:31:41.467993296Z" level=error msg="ContainerStatus for \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\": not found" Dec 13 14:31:41.469088 kubelet[2277]: E1213 14:31:41.468875 2277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\": not found" containerID="f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a" Dec 13 14:31:41.469088 kubelet[2277]: I1213 14:31:41.468923 2277 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a"} err="failed to get container status \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f754d055afef209e2d10574fc3af84ca1463d3723119e36de425330b7d7d583a\": not found" Dec 13 14:31:41.469088 kubelet[2277]: I1213 14:31:41.468948 2277 scope.go:117] "RemoveContainer" containerID="f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487" Dec 13 14:31:41.469391 env[1313]: time="2024-12-13T14:31:41.469312208Z" level=error msg="ContainerStatus for \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\": not found" Dec 13 14:31:41.469558 kubelet[2277]: E1213 14:31:41.469512 2277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\": not found" containerID="f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487" Dec 13 14:31:41.469558 kubelet[2277]: I1213 14:31:41.469559 2277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487"} err="failed to get container status \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2c4fb3b4dc22b6bd108f0bdb95a7be5385b8802a7764edeebf85658fbe72487\": not found" Dec 13 14:31:41.469836 kubelet[2277]: I1213 14:31:41.469576 2277 scope.go:117] "RemoveContainer" containerID="9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270" Dec 13 14:31:41.470002 env[1313]: time="2024-12-13T14:31:41.469821944Z" level=error msg="ContainerStatus for \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\": not found" Dec 13 14:31:41.470125 kubelet[2277]: E1213 14:31:41.470100 2277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\": not found" containerID="9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270" Dec 13 14:31:41.470219 kubelet[2277]: I1213 14:31:41.470144 2277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270"} err="failed to get container status \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\": rpc error: code = NotFound desc = an error occurred when try to find container \"9422279fb2ba463feda9c598d54519ecc21590407e0524f5fd6ff7e4e0c0c270\": not found" Dec 13 14:31:41.470219 kubelet[2277]: I1213 14:31:41.470162 2277 scope.go:117] "RemoveContainer" containerID="473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906" Dec 13 14:31:41.471547 env[1313]: time="2024-12-13T14:31:41.471511757Z" level=info msg="RemoveContainer for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\"" Dec 13 
14:31:41.475545 env[1313]: time="2024-12-13T14:31:41.475496626Z" level=info msg="RemoveContainer for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" returns successfully" Dec 13 14:31:41.475984 kubelet[2277]: I1213 14:31:41.475732 2277 scope.go:117] "RemoveContainer" containerID="473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906" Dec 13 14:31:41.476311 env[1313]: time="2024-12-13T14:31:41.476236171Z" level=error msg="ContainerStatus for \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\": not found" Dec 13 14:31:41.476551 kubelet[2277]: E1213 14:31:41.476532 2277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\": not found" containerID="473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906" Dec 13 14:31:41.476652 kubelet[2277]: I1213 14:31:41.476574 2277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906"} err="failed to get container status \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\": rpc error: code = NotFound desc = an error occurred when try to find container \"473e1cf887a04eaf2e7b34f3158c8753740dbcb613c581e2e0f400a20418e906\": not found" Dec 13 14:31:41.925535 kubelet[2277]: I1213 14:31:41.925490 2277 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" path="/var/lib/kubelet/pods/8ef6187e-6167-481b-a3bc-726709dcb8de/volumes" Dec 13 14:31:41.926762 kubelet[2277]: I1213 14:31:41.926723 2277 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d5a38997-fa1a-415e-9e17-e9360e842d5a" path="/var/lib/kubelet/pods/d5a38997-fa1a-415e-9e17-e9360e842d5a/volumes" Dec 13 14:31:42.155871 kubelet[2277]: E1213 14:31:42.155788 2277 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:31:42.317176 sshd[3841]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:42.324761 systemd[1]: sshd@24-10.128.0.25:22-139.178.68.195:58516.service: Deactivated successfully. Dec 13 14:31:42.327078 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:31:42.327785 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:31:42.330258 systemd-logind[1301]: Removed session 25. Dec 13 14:31:42.361069 systemd[1]: Started sshd@25-10.128.0.25:22-139.178.68.195:58532.service. Dec 13 14:31:42.649063 sshd[4009]: Accepted publickey for core from 139.178.68.195 port 58532 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:31:42.650796 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:42.659154 systemd[1]: Started session-26.scope. Dec 13 14:31:42.660289 systemd-logind[1301]: New session 26 of user core. 
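The ContainerStatus … NotFound errors above are expected rather than fatal: after each successful RemoveContainer the kubelet re-queries the container ID, and the runtime correctly reports that it no longer exists (the ungrammatical "when try to find container" wording is containerd's verbatim error string, quoted as-is). A sketch of how a caller separates this benign case from a real failure, reusing the runtimev1 client type from the first sketch:

package crihelper

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// alreadyGone reports whether the runtime no longer knows the container ID,
// i.e. the NotFound case the kubelet logs above after a successful remove.
func alreadyGone(ctx context.Context, rt runtimev1.RuntimeServiceClient, id string) (bool, error) {
	_, err := rt.ContainerStatus(ctx, &runtimev1.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return true, nil // container already removed; nothing left to do
	}
	return false, err
}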
Dec 13 14:31:43.919687 kubelet[2277]: I1213 14:31:43.919623 2277 topology_manager.go:215] "Topology Admit Handler" podUID="fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" podNamespace="kube-system" podName="cilium-f4mb7" Dec 13 14:31:43.920662 kubelet[2277]: E1213 14:31:43.920635 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" containerName="mount-cgroup" Dec 13 14:31:43.920849 kubelet[2277]: E1213 14:31:43.920829 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" containerName="apply-sysctl-overwrites" Dec 13 14:31:43.921008 kubelet[2277]: E1213 14:31:43.920991 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" containerName="mount-bpf-fs" Dec 13 14:31:43.921115 kubelet[2277]: E1213 14:31:43.921101 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" containerName="cilium-agent" Dec 13 14:31:43.921226 kubelet[2277]: E1213 14:31:43.921212 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5a38997-fa1a-415e-9e17-e9360e842d5a" containerName="cilium-operator" Dec 13 14:31:43.921325 kubelet[2277]: E1213 14:31:43.921311 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" containerName="clean-cilium-state" Dec 13 14:31:43.921475 kubelet[2277]: I1213 14:31:43.921458 2277 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a38997-fa1a-415e-9e17-e9360e842d5a" containerName="cilium-operator" Dec 13 14:31:43.921593 kubelet[2277]: I1213 14:31:43.921578 2277 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef6187e-6167-481b-a3bc-726709dcb8de" containerName="cilium-agent" Dec 13 14:31:43.942076 kubelet[2277]: I1213 14:31:43.942034 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-bpf-maps\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.942387 kubelet[2277]: I1213 14:31:43.942360 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hostproc\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.942645 kubelet[2277]: I1213 14:31:43.942618 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-kernel\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.942803 kubelet[2277]: I1213 14:31:43.942784 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-lib-modules\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.943036 kubelet[2277]: I1213 14:31:43.943015 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-config-path\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.943222 kubelet[2277]: I1213 14:31:43.943206 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcv6\" (UniqueName: \"kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-kube-api-access-xvcv6\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.943360 kubelet[2277]: I1213 14:31:43.943346 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-run\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.943485 kubelet[2277]: I1213 14:31:43.943472 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-ipsec-secrets\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.943639 kubelet[2277]: I1213 14:31:43.943621 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-cgroup\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.943791 kubelet[2277]: I1213 14:31:43.943772 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-etc-cni-netd\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.946167 sshd[4009]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:43.956359 systemd[1]: sshd@25-10.128.0.25:22-139.178.68.195:58532.service: Deactivated successfully. 
Dec 13 14:31:43.957749 kubelet[2277]: I1213 14:31:43.957725 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-xtables-lock\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.957949 kubelet[2277]: I1213 14:31:43.957930 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-clustermesh-secrets\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.958170 kubelet[2277]: I1213 14:31:43.958150 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hubble-tls\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.958406 kubelet[2277]: I1213 14:31:43.958312 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cni-path\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.958601 kubelet[2277]: I1213 14:31:43.958583 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-net\") pod \"cilium-f4mb7\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " pod="kube-system/cilium-f4mb7" Dec 13 14:31:43.958982 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:31:43.965992 systemd-logind[1301]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:31:43.969276 systemd-logind[1301]: Removed session 26. Dec 13 14:31:43.992599 systemd[1]: Started sshd@26-10.128.0.25:22-139.178.68.195:58542.service. Dec 13 14:31:44.237943 env[1313]: time="2024-12-13T14:31:44.237066537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4mb7,Uid:fa80ad6a-54f3-48f2-a9ab-0724ac2530a9,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:44.279227 env[1313]: time="2024-12-13T14:31:44.279104375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:44.279227 env[1313]: time="2024-12-13T14:31:44.279241055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:44.279227 env[1313]: time="2024-12-13T14:31:44.279305031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:44.280272 env[1313]: time="2024-12-13T14:31:44.280175218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15 pid=4035 runtime=io.containerd.runc.v2 Dec 13 14:31:44.324199 sshd[4021]: Accepted publickey for core from 139.178.68.195 port 58542 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:31:44.325356 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:44.338180 systemd[1]: Started session-27.scope. Dec 13 14:31:44.339047 systemd-logind[1301]: New session 27 of user core. Dec 13 14:31:44.369406 env[1313]: time="2024-12-13T14:31:44.368891334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4mb7,Uid:fa80ad6a-54f3-48f2-a9ab-0724ac2530a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\"" Dec 13 14:31:44.374131 env[1313]: time="2024-12-13T14:31:44.374069628Z" level=info msg="CreateContainer within sandbox \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:31:44.399724 env[1313]: time="2024-12-13T14:31:44.399668014Z" level=info msg="CreateContainer within sandbox \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667\"" Dec 13 14:31:44.401101 env[1313]: time="2024-12-13T14:31:44.401057044Z" level=info msg="StartContainer for \"1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667\"" Dec 13 14:31:44.493020 env[1313]: time="2024-12-13T14:31:44.483918987Z" level=info msg="StartContainer for \"1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667\" returns successfully" Dec 13 14:31:44.536294 kubelet[2277]: I1213 14:31:44.534568 2277 setters.go:568] "Node became not ready" node="ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:31:44Z","lastTransitionTime":"2024-12-13T14:31:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:31:44.557791 env[1313]: time="2024-12-13T14:31:44.557726898Z" level=info msg="shim disconnected" id=1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667 Dec 13 14:31:44.558138 env[1313]: time="2024-12-13T14:31:44.557956468Z" level=warning msg="cleaning up after shim disconnected" id=1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667 namespace=k8s.io Dec 13 14:31:44.558138 env[1313]: time="2024-12-13T14:31:44.557984957Z" level=info msg="cleaning up dead shim" Dec 13 14:31:44.577024 env[1313]: time="2024-12-13T14:31:44.576959118Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4130 runtime=io.containerd.runc.v2\n" Dec 13 14:31:44.699542 sshd[4021]: pam_unix(sshd:session): session closed for user core Dec 13 14:31:44.705093 systemd[1]: sshd@26-10.128.0.25:22-139.178.68.195:58542.service: Deactivated successfully. Dec 13 14:31:44.706620 systemd[1]: session-27.scope: Deactivated successfully. 
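mount-cgroup is the first of Cilium's init containers, so the "shim disconnected" / "cleaning up dead shim" entries arriving right after StartContainer returns are ordinary cleanup for a run-to-completion container, not a crash; the node is meanwhile reported NotReady only because the CNI config is still absent. A sketch of the CreateContainer/StartContainer pair driven here, with placeholder image and IDs, again assuming the cri-api bindings:

package crihelper

import (
	"context"

	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer sketches the create/start pair logged above; sandboxCfg
// is the PodSandboxConfig previously passed to RunPodSandbox.
func startInitContainer(ctx context.Context, rt runtimev1.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimev1.PodSandboxConfig) (string, error) {
	created, err := rt.CreateContainer(ctx, &runtimev1.CreateContainerRequest{
		PodSandboxId: sandboxID, // e.g. "566f131229c4..." above (truncated)
		Config: &runtimev1.ContainerConfig{
			Metadata: &runtimev1.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimev1.ImageSpec{Image: "example.invalid/cilium:tag"}, // placeholder
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimev1.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}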
Dec 13 14:31:44.706928 systemd-logind[1301]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:31:44.708801 systemd-logind[1301]: Removed session 27. Dec 13 14:31:44.746571 systemd[1]: Started sshd@27-10.128.0.25:22-139.178.68.195:58556.service. Dec 13 14:31:45.044583 sshd[4145]: Accepted publickey for core from 139.178.68.195 port 58556 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:31:45.047877 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:31:45.058067 systemd[1]: Started session-28.scope. Dec 13 14:31:45.060490 systemd-logind[1301]: New session 28 of user core. Dec 13 14:31:45.422918 env[1313]: time="2024-12-13T14:31:45.422814365Z" level=info msg="CreateContainer within sandbox \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:31:45.453172 env[1313]: time="2024-12-13T14:31:45.453096371Z" level=info msg="CreateContainer within sandbox \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3\"" Dec 13 14:31:45.454222 env[1313]: time="2024-12-13T14:31:45.454154833Z" level=info msg="StartContainer for \"dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3\"" Dec 13 14:31:45.566637 env[1313]: time="2024-12-13T14:31:45.566568434Z" level=info msg="StartContainer for \"dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3\" returns successfully" Dec 13 14:31:45.609908 env[1313]: time="2024-12-13T14:31:45.609802034Z" level=info msg="shim disconnected" id=dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3 Dec 13 14:31:45.610299 env[1313]: time="2024-12-13T14:31:45.610261566Z" level=warning msg="cleaning up after shim disconnected" id=dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3 namespace=k8s.io Dec 13 14:31:45.610299 env[1313]: time="2024-12-13T14:31:45.610294107Z" level=info msg="cleaning up dead shim" Dec 13 14:31:45.624904 env[1313]: time="2024-12-13T14:31:45.624795143Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4203 runtime=io.containerd.runc.v2\n" Dec 13 14:31:46.077616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3-rootfs.mount: Deactivated successfully. Dec 13 14:31:46.429910 env[1313]: time="2024-12-13T14:31:46.425712764Z" level=info msg="StopPodSandbox for \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\"" Dec 13 14:31:46.429910 env[1313]: time="2024-12-13T14:31:46.425805070Z" level=info msg="Container to stop \"1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:46.429910 env[1313]: time="2024-12-13T14:31:46.425828553Z" level=info msg="Container to stop \"dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:31:46.430210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15-shm.mount: Deactivated successfully. 
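Notably, the sandbox 566f1312… created only seconds earlier is itself stopped here, with both of its init containers already in CONTAINER_EXITED; deactivating the per-sandbox shm.mount is part of the same teardown, which continues in the entries below. A sketch of the corresponding RPC, with a truncated placeholder ID:

package crihelper

import (
	"context"

	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopSandbox sketches the StopPodSandbox call above; with all containers
// exited, this tears down the sandbox's network namespace and shm mount.
func stopSandbox(ctx context.Context, rt runtimev1.RuntimeServiceClient, sandboxID string) error {
	_, err := rt.StopPodSandbox(ctx, &runtimev1.StopPodSandboxRequest{
		PodSandboxId: sandboxID, // e.g. "566f131229c4..." (truncated)
	})
	return err
}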
Dec 13 14:31:46.488174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15-rootfs.mount: Deactivated successfully. Dec 13 14:31:46.495548 env[1313]: time="2024-12-13T14:31:46.495475382Z" level=info msg="shim disconnected" id=566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15 Dec 13 14:31:46.495870 env[1313]: time="2024-12-13T14:31:46.495554263Z" level=warning msg="cleaning up after shim disconnected" id=566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15 namespace=k8s.io Dec 13 14:31:46.495870 env[1313]: time="2024-12-13T14:31:46.495572140Z" level=info msg="cleaning up dead shim" Dec 13 14:31:46.510882 env[1313]: time="2024-12-13T14:31:46.510763231Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4239 runtime=io.containerd.runc.v2\n" Dec 13 14:31:46.511334 env[1313]: time="2024-12-13T14:31:46.511288431Z" level=info msg="TearDown network for sandbox \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\" successfully" Dec 13 14:31:46.511334 env[1313]: time="2024-12-13T14:31:46.511333159Z" level=info msg="StopPodSandbox for \"566f131229c470885de6eb8e84caf087b6454f2cd7ababc19c3fcdcf6658be15\" returns successfully" Dec 13 14:31:46.586072 kubelet[2277]: I1213 14:31:46.584839 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hostproc\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.586072 kubelet[2277]: I1213 14:31:46.584934 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-net\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.586072 kubelet[2277]: I1213 14:31:46.584973 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-kernel\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.586072 kubelet[2277]: I1213 14:31:46.584967 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hostproc" (OuterVolumeSpecName: "hostproc") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:46.586072 kubelet[2277]: I1213 14:31:46.585019 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-ipsec-secrets\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587113 kubelet[2277]: I1213 14:31:46.585042 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:46.587113 kubelet[2277]: I1213 14:31:46.585056 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-cgroup\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587113 kubelet[2277]: I1213 14:31:46.585076 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:31:46.587113 kubelet[2277]: I1213 14:31:46.585089 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-xtables-lock\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587113 kubelet[2277]: I1213 14:31:46.585132 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvcv6\" (UniqueName: \"kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-kube-api-access-xvcv6\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587408 kubelet[2277]: I1213 14:31:46.585167 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-etc-cni-netd\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587408 kubelet[2277]: I1213 14:31:46.585202 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-clustermesh-secrets\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587408 kubelet[2277]: I1213 14:31:46.585233 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-bpf-maps\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587408 kubelet[2277]: I1213 14:31:46.585262 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-lib-modules\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587408 kubelet[2277]: I1213 14:31:46.585293 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-run\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") " Dec 13 14:31:46.587408 kubelet[2277]: I1213 14:31:46.585335 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
Dec 13 14:31:46.587753 kubelet[2277]: I1213 14:31:46.585370 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hubble-tls\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") "
Dec 13 14:31:46.587753 kubelet[2277]: I1213 14:31:46.585399 2277 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cni-path\") pod \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\" (UID: \"fa80ad6a-54f3-48f2-a9ab-0724ac2530a9\") "
Dec 13 14:31:46.587753 kubelet[2277]: I1213 14:31:46.585472 2277 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hostproc\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.587753 kubelet[2277]: I1213 14:31:46.585494 2277 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-net\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.587753 kubelet[2277]: I1213 14:31:46.585516 2277 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-host-proc-sys-kernel\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.587753 kubelet[2277]: I1213 14:31:46.585561 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cni-path" (OuterVolumeSpecName: "cni-path") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.588117 kubelet[2277]: I1213 14:31:46.585599 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.588117 kubelet[2277]: I1213 14:31:46.585635 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.588117 kubelet[2277]: I1213 14:31:46.586176 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.588117 kubelet[2277]: I1213 14:31:46.586242 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.591931 kubelet[2277]: I1213 14:31:46.588699 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.592559 kubelet[2277]: I1213 14:31:46.592525 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:31:46.592788 kubelet[2277]: I1213 14:31:46.592754 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:31:46.600125 systemd[1]: var-lib-kubelet-pods-fa80ad6a\x2d54f3\x2d48f2\x2da9ab\x2d0724ac2530a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxvcv6.mount: Deactivated successfully.
Dec 13 14:31:46.608278 systemd[1]: var-lib-kubelet-pods-fa80ad6a\x2d54f3\x2d48f2\x2da9ab\x2d0724ac2530a9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:31:46.610681 kubelet[2277]: I1213 14:31:46.610637 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-kube-api-access-xvcv6" (OuterVolumeSpecName: "kube-api-access-xvcv6") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "kube-api-access-xvcv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:31:46.611500 kubelet[2277]: I1213 14:31:46.611467 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:31:46.617770 kubelet[2277]: I1213 14:31:46.617727 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
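
The systemd mount unit names above (var-lib-kubelet-pods-fa80ad6a\x2d…\x7eprojected-kube\x2dapi\x2daccess\x2dxvcv6.mount) show systemd's unit-name escaping: '/' becomes '-', while bytes such as '-' and '~' are hex-escaped as \x2d and \x7e. A simplified sketch of that escaping, sufficient to reproduce the names in this log; systemd's real rules (systemd.unit(5), systemd-escape) also special-case leading dots and empty paths:

```go
// Simplified sketch of systemd's path-to-unit-name escaping, enough
// to reproduce the \x2d / \x7e mount unit names above. Not systemd's
// full algorithm.
package main

import "fmt"

func systemdEscapePath(path string) string {
	out := ""
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out += "-" // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			out += string(c) // allowed verbatim
		default:
			out += fmt.Sprintf(`\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
		}
	}
	return out
}

func main() {
	p := "var/lib/kubelet/pods/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9/volumes/kubernetes.io~projected/kube-api-access-xvcv6"
	fmt.Println(systemdEscapePath(p) + ".mount")
}
```
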
Dec 13 14:31:46.618435 kubelet[2277]: I1213 14:31:46.618344 2277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" (UID: "fa80ad6a-54f3-48f2-a9ab-0724ac2530a9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:31:46.685993 kubelet[2277]: I1213 14:31:46.685804 2277 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xvcv6\" (UniqueName: \"kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-kube-api-access-xvcv6\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.686353 kubelet[2277]: I1213 14:31:46.686328 2277 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-etc-cni-netd\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.686485 kubelet[2277]: I1213 14:31:46.686469 2277 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-clustermesh-secrets\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.686590 kubelet[2277]: I1213 14:31:46.686576 2277 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-bpf-maps\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.686709 kubelet[2277]: I1213 14:31:46.686695 2277 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-lib-modules\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.686842 kubelet[2277]: I1213 14:31:46.686813 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-run\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.687185 kubelet[2277]: I1213 14:31:46.687165 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-config-path\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.687329 kubelet[2277]: I1213 14:31:46.687315 2277 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-hubble-tls\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.687447 kubelet[2277]: I1213 14:31:46.687428 2277 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cni-path\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.687554 kubelet[2277]: I1213 14:31:46.687541 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-ipsec-secrets\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.687672 kubelet[2277]: I1213 14:31:46.687657 2277 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-cilium-cgroup\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:46.687795 kubelet[2277]: I1213 14:31:46.687781 2277 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9-xtables-lock\") on node \"ci-3510-3-6-1362b13a5ef71f30ed26.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:31:47.077272 systemd[1]: var-lib-kubelet-pods-fa80ad6a\x2d54f3\x2d48f2\x2da9ab\x2d0724ac2530a9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:31:47.077612 systemd[1]: var-lib-kubelet-pods-fa80ad6a\x2d54f3\x2d48f2\x2da9ab\x2d0724ac2530a9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:31:47.157548 kubelet[2277]: E1213 14:31:47.157500 2277 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:31:47.430700 kubelet[2277]: I1213 14:31:47.430305 2277 scope.go:117] "RemoveContainer" containerID="dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3"
Dec 13 14:31:47.434092 env[1313]: time="2024-12-13T14:31:47.434037624Z" level=info msg="RemoveContainer for \"dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3\""
Dec 13 14:31:47.440298 env[1313]: time="2024-12-13T14:31:47.440250100Z" level=info msg="RemoveContainer for \"dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3\" returns successfully"
Dec 13 14:31:47.440720 kubelet[2277]: I1213 14:31:47.440696 2277 scope.go:117] "RemoveContainer" containerID="1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667"
Dec 13 14:31:47.443082 env[1313]: time="2024-12-13T14:31:47.442255321Z" level=info msg="RemoveContainer for \"1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667\""
Dec 13 14:31:47.452930 env[1313]: time="2024-12-13T14:31:47.449968391Z" level=info msg="RemoveContainer for \"1daade0ac5dfad7caaa74da65d4902fc9c8a573bd6941de83052b4864a680667\" returns successfully"
Dec 13 14:31:47.490818 kubelet[2277]: I1213 14:31:47.490755 2277 topology_manager.go:215] "Topology Admit Handler" podUID="7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14" podNamespace="kube-system" podName="cilium-qbw5l"
Dec 13 14:31:47.491352 kubelet[2277]: E1213 14:31:47.491304 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" containerName="apply-sysctl-overwrites"
Dec 13 14:31:47.491582 kubelet[2277]: E1213 14:31:47.491550 2277 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" containerName="mount-cgroup"
Dec 13 14:31:47.491795 kubelet[2277]: I1213 14:31:47.491765 2277 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" containerName="apply-sysctl-overwrites"
Dec 13 14:31:47.596184 kubelet[2277]: I1213 14:31:47.596099 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-cilium-run\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
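
The paired "RemoveContainer" lines a few entries above (kubelet scope.go on one side, containerd's reply on the other) are two ends of a single CRI RemoveContainer RPC that garbage-collects the dead containers dd938… and 1daade…. A hedged sketch of issuing that RPC directly with the public CRI Go API; the socket path is containerd's default and the container ID is taken from this log:

```go
// Sketch: issue a CRI RemoveContainer RPC like the kubelet does in
// the lines above, using k8s.io/cri-api over gRPC.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = client.RemoveContainer(context.Background(),
		&runtimeapi.RemoveContainerRequest{
			ContainerId: "dd9385647f731d6d0b33121e019cef3d3507522a7cdf7f26fe7a0fe609a281b3",
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("RemoveContainer returned successfully")
}
```
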
Dec 13 14:31:47.596184 kubelet[2277]: I1213 14:31:47.596185 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-cni-path\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597083 kubelet[2277]: I1213 14:31:47.596225 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-lib-modules\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597083 kubelet[2277]: I1213 14:31:47.596258 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-host-proc-sys-kernel\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597083 kubelet[2277]: I1213 14:31:47.596291 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct62x\" (UniqueName: \"kubernetes.io/projected/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-kube-api-access-ct62x\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597083 kubelet[2277]: I1213 14:31:47.596327 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-bpf-maps\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597083 kubelet[2277]: I1213 14:31:47.596393 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-host-proc-sys-net\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597316 kubelet[2277]: I1213 14:31:47.596439 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-clustermesh-secrets\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597316 kubelet[2277]: I1213 14:31:47.596479 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-hostproc\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597316 kubelet[2277]: I1213 14:31:47.596515 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-cilium-config-path\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
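
The VerifyControllerAttachedVolume entries enumerate the replacement pod cilium-qbw5l's volumes: hostPath mounts (cilium-run, cni-path, bpf-maps, lib-modules, ...), secrets, a configmap, and a projected service-account token. A sketch of declaring such hostPath volumes with the Kubernetes Go types; the host paths shown are typical Cilium defaults and are assumptions, not values read from this log:

```go
// Sketch: declaring hostPath volumes like those attached to
// cilium-qbw5l above. Host paths are illustrative Cilium defaults.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPathVolume("cilium-run", "/var/run/cilium"),
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),
		hostPathVolume("lib-modules", "/lib/modules"),
		hostPathVolume("etc-cni-netd", "/etc/cni/net.d"),
		hostPathVolume("xtables-lock", "/run/xtables.lock"),
	}
	for _, v := range volumes {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```
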
Dec 13 14:31:47.597316 kubelet[2277]: I1213 14:31:47.596548 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-hubble-tls\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597316 kubelet[2277]: I1213 14:31:47.596586 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-cilium-ipsec-secrets\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597316 kubelet[2277]: I1213 14:31:47.596624 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-etc-cni-netd\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597526 kubelet[2277]: I1213 14:31:47.596662 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-xtables-lock\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.597526 kubelet[2277]: I1213 14:31:47.596703 2277 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14-cilium-cgroup\") pod \"cilium-qbw5l\" (UID: \"7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14\") " pod="kube-system/cilium-qbw5l"
Dec 13 14:31:47.823429 env[1313]: time="2024-12-13T14:31:47.823359953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbw5l,Uid:7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14,Namespace:kube-system,Attempt:0,}"
Dec 13 14:31:47.867595 env[1313]: time="2024-12-13T14:31:47.867474369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:31:47.867595 env[1313]: time="2024-12-13T14:31:47.867540226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:31:47.867595 env[1313]: time="2024-12-13T14:31:47.867565756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
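
The RunPodSandbox entry above, and the CreateContainer/StartContainer entries that follow, are the standard CRI bring-up sequence for the new pod. A condensed sketch of that call order over the CRI API; configs are left minimal and the image name is illustrative, so this shows the sequence of RPCs, not the kubelet's full logic:

```go
// Sketch of the CRI call order visible in this log: RunPodSandbox,
// then CreateContainer and StartContainer inside the sandbox.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-qbw5l",
			Namespace: "kube-system",
			Uid:       "7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // illustrative
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: ctr.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```
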
Dec 13 14:31:47.868325 env[1313]: time="2024-12-13T14:31:47.868264977Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4 pid=4267 runtime=io.containerd.runc.v2
Dec 13 14:31:47.930381 kubelet[2277]: I1213 14:31:47.930340 2277 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fa80ad6a-54f3-48f2-a9ab-0724ac2530a9" path="/var/lib/kubelet/pods/fa80ad6a-54f3-48f2-a9ab-0724ac2530a9/volumes"
Dec 13 14:31:47.942372 env[1313]: time="2024-12-13T14:31:47.942308556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbw5l,Uid:7e47258b-4ce8-4e8a-bf2b-ffcb68bdee14,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\""
Dec 13 14:31:47.947982 env[1313]: time="2024-12-13T14:31:47.947921000Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:31:47.964414 env[1313]: time="2024-12-13T14:31:47.964350478Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1be7e0a984b1895e6ad1045e801f1f6cb2a1d0357cc890a5b883245baae825c\""
Dec 13 14:31:47.965900 env[1313]: time="2024-12-13T14:31:47.965725390Z" level=info msg="StartContainer for \"f1be7e0a984b1895e6ad1045e801f1f6cb2a1d0357cc890a5b883245baae825c\""
Dec 13 14:31:48.059253 env[1313]: time="2024-12-13T14:31:48.059177851Z" level=info msg="StartContainer for \"f1be7e0a984b1895e6ad1045e801f1f6cb2a1d0357cc890a5b883245baae825c\" returns successfully"
Dec 13 14:31:48.106535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1be7e0a984b1895e6ad1045e801f1f6cb2a1d0357cc890a5b883245baae825c-rootfs.mount: Deactivated successfully.
Dec 13 14:31:48.117603 env[1313]: time="2024-12-13T14:31:48.117501487Z" level=info msg="shim disconnected" id=f1be7e0a984b1895e6ad1045e801f1f6cb2a1d0357cc890a5b883245baae825c
Dec 13 14:31:48.117914 env[1313]: time="2024-12-13T14:31:48.117601394Z" level=warning msg="cleaning up after shim disconnected" id=f1be7e0a984b1895e6ad1045e801f1f6cb2a1d0357cc890a5b883245baae825c namespace=k8s.io
Dec 13 14:31:48.117914 env[1313]: time="2024-12-13T14:31:48.117621102Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:48.131602 env[1313]: time="2024-12-13T14:31:48.131516115Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4354 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:48.441091 env[1313]: time="2024-12-13T14:31:48.440822365Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:31:48.468821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount38884577.mount: Deactivated successfully.
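
Each Cilium init container (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) runs to completion, so the "shim disconnected" warnings right after each StartContainer are expected exits, not failures. Meanwhile the kubelet keeps reporting "cni plugin not initialized" (seen earlier) until the agent writes a CNI config; that readiness condition is essentially "a parseable config file exists in the CNI conf directory". A toy check of that condition, using the conventional default path:

```go
// Toy check for CNI readiness: does /etc/cni/net.d contain a network
// config? This mirrors (in simplified form) the condition behind the
// kubelet's "cni plugin not initialized" error earlier in this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("network ready, found", e.Name())
			return
		}
	}
	fmt.Println("network not ready: no CNI config found")
}
```
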
Dec 13 14:31:48.484018 env[1313]: time="2024-12-13T14:31:48.483943597Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1eafe785a8cb1dc281ed4788b65b598ba7d98d9933d6ea30ffbde50c52e46f3a\""
Dec 13 14:31:48.486935 env[1313]: time="2024-12-13T14:31:48.486815184Z" level=info msg="StartContainer for \"1eafe785a8cb1dc281ed4788b65b598ba7d98d9933d6ea30ffbde50c52e46f3a\""
Dec 13 14:31:48.570142 env[1313]: time="2024-12-13T14:31:48.570072479Z" level=info msg="StartContainer for \"1eafe785a8cb1dc281ed4788b65b598ba7d98d9933d6ea30ffbde50c52e46f3a\" returns successfully"
Dec 13 14:31:48.607529 env[1313]: time="2024-12-13T14:31:48.607456420Z" level=info msg="shim disconnected" id=1eafe785a8cb1dc281ed4788b65b598ba7d98d9933d6ea30ffbde50c52e46f3a
Dec 13 14:31:48.607963 env[1313]: time="2024-12-13T14:31:48.607538556Z" level=warning msg="cleaning up after shim disconnected" id=1eafe785a8cb1dc281ed4788b65b598ba7d98d9933d6ea30ffbde50c52e46f3a namespace=k8s.io
Dec 13 14:31:48.607963 env[1313]: time="2024-12-13T14:31:48.607554815Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:48.622684 env[1313]: time="2024-12-13T14:31:48.622620883Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4419 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:49.466893 env[1313]: time="2024-12-13T14:31:49.461200900Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:31:49.481040 env[1313]: time="2024-12-13T14:31:49.480447247Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183\""
Dec 13 14:31:49.483230 env[1313]: time="2024-12-13T14:31:49.481831196Z" level=info msg="StartContainer for \"cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183\""
Dec 13 14:31:49.611219 env[1313]: time="2024-12-13T14:31:49.611152010Z" level=info msg="StartContainer for \"cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183\" returns successfully"
Dec 13 14:31:49.658700 env[1313]: time="2024-12-13T14:31:49.658607748Z" level=info msg="shim disconnected" id=cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183
Dec 13 14:31:49.658700 env[1313]: time="2024-12-13T14:31:49.658687145Z" level=warning msg="cleaning up after shim disconnected" id=cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183 namespace=k8s.io
Dec 13 14:31:49.658700 env[1313]: time="2024-12-13T14:31:49.658703403Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:49.672671 env[1313]: time="2024-12-13T14:31:49.672603371Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4478 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:50.078938 systemd[1]: run-containerd-runc-k8s.io-cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183-runc.rEx7xG.mount: Deactivated successfully.
Dec 13 14:31:50.079245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbb9eb2b3a7f5994cd1a4a59a5d7b75e576d6ef189fa09910cfc13dc95565183-rootfs.mount: Deactivated successfully.
Dec 13 14:31:50.462750 env[1313]: time="2024-12-13T14:31:50.462301505Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:31:50.497016 env[1313]: time="2024-12-13T14:31:50.493971903Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"baae11107f2f66ef8ac2579dd46cb037c628184b0090d451fdea43dfff47dd99\""
Dec 13 14:31:50.495918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277491232.mount: Deactivated successfully.
Dec 13 14:31:50.498630 env[1313]: time="2024-12-13T14:31:50.498543367Z" level=info msg="StartContainer for \"baae11107f2f66ef8ac2579dd46cb037c628184b0090d451fdea43dfff47dd99\""
Dec 13 14:31:50.583442 env[1313]: time="2024-12-13T14:31:50.583260206Z" level=info msg="StartContainer for \"baae11107f2f66ef8ac2579dd46cb037c628184b0090d451fdea43dfff47dd99\" returns successfully"
Dec 13 14:31:50.617721 env[1313]: time="2024-12-13T14:31:50.617646374Z" level=info msg="shim disconnected" id=baae11107f2f66ef8ac2579dd46cb037c628184b0090d451fdea43dfff47dd99
Dec 13 14:31:50.617721 env[1313]: time="2024-12-13T14:31:50.617723381Z" level=warning msg="cleaning up after shim disconnected" id=baae11107f2f66ef8ac2579dd46cb037c628184b0090d451fdea43dfff47dd99 namespace=k8s.io
Dec 13 14:31:50.618166 env[1313]: time="2024-12-13T14:31:50.617739142Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:50.632049 env[1313]: time="2024-12-13T14:31:50.631986143Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4534 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:51.079126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baae11107f2f66ef8ac2579dd46cb037c628184b0090d451fdea43dfff47dd99-rootfs.mount: Deactivated successfully.
Dec 13 14:31:51.468458 env[1313]: time="2024-12-13T14:31:51.468294580Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:31:51.499730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700863685.mount: Deactivated successfully.
Dec 13 14:31:51.509170 env[1313]: time="2024-12-13T14:31:51.509088335Z" level=info msg="CreateContainer within sandbox \"fdfb53f14f0827f2b927f8ed9229cc2bc5670909652099d47e65c599c2811af4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"20278eeb39216d34e62cd6fcf80bb01272b38acb396d167d6cc96201d0968fa6\""
Dec 13 14:31:51.516068 env[1313]: time="2024-12-13T14:31:51.515849798Z" level=info msg="StartContainer for \"20278eeb39216d34e62cd6fcf80bb01272b38acb396d167d6cc96201d0968fa6\""
Dec 13 14:31:51.622830 env[1313]: time="2024-12-13T14:31:51.620312184Z" level=info msg="StartContainer for \"20278eeb39216d34e62cd6fcf80bb01272b38acb396d167d6cc96201d0968fa6\" returns successfully"
Dec 13 14:31:52.096909 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:31:52.491212 kubelet[2277]: I1213 14:31:52.491040 2277 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qbw5l" podStartSLOduration=5.490973475 podStartE2EDuration="5.490973475s" podCreationTimestamp="2024-12-13 14:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:52.490435969 +0000 UTC m=+140.870283732" watchObservedRunningTime="2024-12-13 14:31:52.490973475 +0000 UTC m=+140.870821243"
Dec 13 14:31:53.582129 systemd[1]: run-containerd-runc-k8s.io-20278eeb39216d34e62cd6fcf80bb01272b38acb396d167d6cc96201d0968fa6-runc.Pp0JYs.mount: Deactivated successfully.
Dec 13 14:31:55.560116 systemd-networkd[1073]: lxc_health: Link UP
Dec 13 14:31:55.568438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:31:55.571654 systemd-networkd[1073]: lxc_health: Gained carrier
Dec 13 14:31:56.993964 systemd-networkd[1073]: lxc_health: Gained IPv6LL
Dec 13 14:31:58.282621 systemd[1]: run-containerd-runc-k8s.io-20278eeb39216d34e62cd6fcf80bb01272b38acb396d167d6cc96201d0968fa6-runc.gK8HeR.mount: Deactivated successfully.
Dec 13 14:32:00.587458 systemd[1]: run-containerd-runc-k8s.io-20278eeb39216d34e62cd6fcf80bb01272b38acb396d167d6cc96201d0968fa6-runc.c8nlMZ.mount: Deactivated successfully.
Dec 13 14:32:00.753228 sshd[4145]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:00.759983 systemd[1]: sshd@27-10.128.0.25:22-139.178.68.195:58556.service: Deactivated successfully.
Dec 13 14:32:00.761534 systemd-logind[1301]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:32:00.763290 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:32:00.765271 systemd-logind[1301]: Removed session 28.
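
The pod_startup_latency_tracker entry above reports podStartSLOduration=5.490973475s for cilium-qbw5l: watchObservedRunningTime (14:31:52.490973475) minus podCreationTimestamp (14:31:47); with no image pull (the zero-value pulling timestamps), the SLO duration equals the end-to-end duration. A quick check of that arithmetic, with the timestamps copied from the log line:

```go
// Quick check of the podStartSLOduration arithmetic in the log entry
// above: watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 14:31:47 +0000 UTC")
	running, _ := time.Parse(layout, "2024-12-13 14:31:52.490973475 +0000 UTC")
	fmt.Println(running.Sub(created)) // 5.490973475s, matching the log
}
```
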