Sep 13 01:30:16.990524 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 01:30:16.990542 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 01:30:16.990549 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 13 01:30:16.990556 kernel: printk: bootconsole [pl11] enabled
Sep 13 01:30:16.990561 kernel: efi: EFI v2.70 by EDK II
Sep 13 01:30:16.990567 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 13 01:30:16.997607 kernel: random: crng init done
Sep 13 01:30:16.997630 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:30:16.997637 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 13 01:30:16.997642 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997648 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997653 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 01:30:16.997663 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997669 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997676 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997682 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997688 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997695 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997701 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 13 01:30:16.997707 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:30:16.997712 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 13 01:30:16.997718 kernel: NUMA: Failed to initialise from firmware
Sep 13 01:30:16.997724 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:30:16.997730 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Sep 13 01:30:16.997736 kernel: Zone ranges:
Sep 13 01:30:16.997742 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 13 01:30:16.997747 kernel: DMA32 empty
Sep 13 01:30:16.997753 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:30:16.997760 kernel: Movable zone start for each node
Sep 13 01:30:16.997766 kernel: Early memory node ranges
Sep 13 01:30:16.997771 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 13 01:30:16.997777 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 13 01:30:16.997783 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 13 01:30:16.997788 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 13 01:30:16.997794 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 13 01:30:16.997800 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 13 01:30:16.997805 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:30:16.997811 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:30:16.997817 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 13 01:30:16.997823 kernel: psci: probing for conduit method from ACPI.
Sep 13 01:30:16.997832 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 01:30:16.997838 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 01:30:16.997845 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 13 01:30:16.997851 kernel: psci: SMC Calling Convention v1.4
Sep 13 01:30:16.997857 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 13 01:30:16.997864 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 13 01:30:16.997870 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 01:30:16.997876 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 01:30:16.997882 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 01:30:16.997888 kernel: Detected PIPT I-cache on CPU0
Sep 13 01:30:16.997895 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 01:30:16.997901 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 01:30:16.997907 kernel: CPU features: detected: Spectre-BHB
Sep 13 01:30:16.997913 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 01:30:16.997919 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 01:30:16.997926 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 01:30:16.997933 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 13 01:30:16.997939 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 01:30:16.997946 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 13 01:30:16.997952 kernel: Policy zone: Normal
Sep 13 01:30:16.997959 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:30:16.997966 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:30:16.997972 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 01:30:16.997978 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:30:16.997984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:30:16.997990 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 13 01:30:16.997997 kernel: Memory: 3986876K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207284K reserved, 0K cma-reserved)
Sep 13 01:30:16.998005 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 01:30:16.998011 kernel: trace event string verifier disabled
Sep 13 01:30:16.998017 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 01:30:16.998023 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:30:16.998030 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 01:30:16.998036 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 01:30:16.998042 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:30:16.998048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:30:16.998054 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 01:30:16.998061 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 01:30:16.998067 kernel: GICv3: 960 SPIs implemented
Sep 13 01:30:16.998074 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 01:30:16.998080 kernel: GICv3: Distributor has no Range Selector support
Sep 13 01:30:16.998087 kernel: Root IRQ handler: gic_handle_irq
Sep 13 01:30:16.998092 kernel: GICv3: 16 PPIs implemented
Sep 13 01:30:16.998099 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 13 01:30:16.998105 kernel: ITS: No ITS available, not enabling LPIs
Sep 13 01:30:16.998111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:30:16.998117 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 01:30:16.998123 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 01:30:16.998130 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 01:30:16.998136 kernel: Console: colour dummy device 80x25
Sep 13 01:30:16.998144 kernel: printk: console [tty1] enabled
Sep 13 01:30:16.998150 kernel: ACPI: Core revision 20210730
Sep 13 01:30:16.998157 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 01:30:16.998163 kernel: pid_max: default: 32768 minimum: 301
Sep 13 01:30:16.998169 kernel: LSM: Security Framework initializing
Sep 13 01:30:16.998176 kernel: SELinux: Initializing.
Sep 13 01:30:16.998182 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:30:16.998189 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:30:16.998195 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 13 01:30:16.998203 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 13 01:30:16.998209 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:30:16.998216 kernel: Remapping and enabling EFI services.
Sep 13 01:30:16.998222 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:30:16.998228 kernel: Detected PIPT I-cache on CPU1
Sep 13 01:30:16.998235 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 13 01:30:16.998241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:30:16.998248 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 01:30:16.998254 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 01:30:16.998260 kernel: SMP: Total of 2 processors activated.
Sep 13 01:30:16.998268 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 01:30:16.998275 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 13 01:30:16.998282 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 01:30:16.998288 kernel: CPU features: detected: CRC32 instructions
Sep 13 01:30:16.998294 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 01:30:16.998301 kernel: CPU features: detected: LSE atomic instructions
Sep 13 01:30:16.998307 kernel: CPU features: detected: Privileged Access Never
Sep 13 01:30:16.998313 kernel: CPU: All CPU(s) started at EL1
Sep 13 01:30:16.998319 kernel: alternatives: patching kernel code
Sep 13 01:30:16.998327 kernel: devtmpfs: initialized
Sep 13 01:30:16.998338 kernel: KASLR enabled
Sep 13 01:30:16.998345 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 01:30:16.998353 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 01:30:16.998359 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 01:30:16.998366 kernel: SMBIOS 3.1.0 present.
Sep 13 01:30:16.998373 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 01:30:16.998379 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 01:30:16.998386 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 01:30:16.998394 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 01:30:16.998401 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 01:30:16.998408 kernel: audit: initializing netlink subsys (disabled)
Sep 13 01:30:16.998415 kernel: audit: type=2000 audit(0.088:1): state=initialized audit_enabled=0 res=1
Sep 13 01:30:16.998421 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 01:30:16.998428 kernel: cpuidle: using governor menu
Sep 13 01:30:16.998434 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 01:30:16.998442 kernel: ASID allocator initialised with 32768 entries
Sep 13 01:30:16.998449 kernel: ACPI: bus type PCI registered
Sep 13 01:30:16.998456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 01:30:16.998462 kernel: Serial: AMBA PL011 UART driver
Sep 13 01:30:16.998469 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 01:30:16.998475 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 01:30:16.998482 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 01:30:16.998489 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 01:30:16.998496 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 01:30:16.998503 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 01:30:16.998510 kernel: ACPI: Added _OSI(Module Device)
Sep 13 01:30:16.998517 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 01:30:16.998523 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 01:30:16.998530 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 01:30:16.998536 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 01:30:16.998543 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 01:30:16.998550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 01:30:16.998556 kernel: ACPI: Interpreter enabled
Sep 13 01:30:16.998564 kernel: ACPI: Using GIC for interrupt routing
Sep 13 01:30:16.998571 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 01:30:16.998593 kernel: printk: console [ttyAMA0] enabled
Sep 13 01:30:16.998600 kernel: printk: bootconsole [pl11] disabled
Sep 13 01:30:16.998607 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 13 01:30:16.998614 kernel: iommu: Default domain type: Translated
Sep 13 01:30:16.998620 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 01:30:16.998627 kernel: vgaarb: loaded
Sep 13 01:30:16.998633 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 01:30:16.998640 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 01:30:16.998648 kernel: PTP clock support registered
Sep 13 01:30:16.998655 kernel: Registered efivars operations
Sep 13 01:30:16.998662 kernel: No ACPI PMU IRQ for CPU0
Sep 13 01:30:16.998668 kernel: No ACPI PMU IRQ for CPU1
Sep 13 01:30:16.998675 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 01:30:16.998681 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 01:30:16.998688 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 01:30:16.998694 kernel: pnp: PnP ACPI init
Sep 13 01:30:16.998701 kernel: pnp: PnP ACPI: found 0 devices
Sep 13 01:30:16.998709 kernel: NET: Registered PF_INET protocol family
Sep 13 01:30:16.998715 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 01:30:16.998722 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 01:30:16.998729 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 01:30:16.998735 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 01:30:16.998742 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 01:30:16.998749 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 01:30:16.998756 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:30:16.998763 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:30:16.998770 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 01:30:16.998777 kernel: PCI: CLS 0 bytes, default 64
Sep 13 01:30:16.998783 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 13 01:30:16.998790 kernel: kvm [1]: HYP mode not available
Sep 13 01:30:16.998797 kernel: Initialise system trusted keyrings
Sep 13 01:30:16.998803 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 01:30:16.998809 kernel: Key type asymmetric registered
Sep 13 01:30:16.998816 kernel: Asymmetric key parser 'x509' registered
Sep 13 01:30:16.998824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 01:30:16.998830 kernel: io scheduler mq-deadline registered
Sep 13 01:30:16.998837 kernel: io scheduler kyber registered
Sep 13 01:30:16.998844 kernel: io scheduler bfq registered
Sep 13 01:30:16.998850 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 01:30:16.998857 kernel: thunder_xcv, ver 1.0
Sep 13 01:30:16.998863 kernel: thunder_bgx, ver 1.0
Sep 13 01:30:16.998869 kernel: nicpf, ver 1.0
Sep 13 01:30:16.998876 kernel: nicvf, ver 1.0
Sep 13 01:30:16.998999 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 01:30:16.999062 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T01:30:16 UTC (1757727016)
Sep 13 01:30:16.999071 kernel: efifb: probing for efifb
Sep 13 01:30:16.999078 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 01:30:16.999085 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 01:30:16.999091 kernel: efifb: scrolling: redraw
Sep 13 01:30:16.999098 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 01:30:16.999104 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:30:16.999112 kernel: fb0: EFI VGA frame buffer device
Sep 13 01:30:16.999120 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 13 01:30:16.999126 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 01:30:16.999133 kernel: NET: Registered PF_INET6 protocol family
Sep 13 01:30:16.999139 kernel: Segment Routing with IPv6
Sep 13 01:30:16.999146 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 01:30:16.999152 kernel: NET: Registered PF_PACKET protocol family
Sep 13 01:30:16.999159 kernel: Key type dns_resolver registered
Sep 13 01:30:16.999166 kernel: registered taskstats version 1
Sep 13 01:30:16.999172 kernel: Loading compiled-in X.509 certificates
Sep 13 01:30:16.999180 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 01:30:16.999187 kernel: Key type .fscrypt registered
Sep 13 01:30:16.999193 kernel: Key type fscrypt-provisioning registered
Sep 13 01:30:16.999200 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 01:30:16.999207 kernel: ima: Allocated hash algorithm: sha1
Sep 13 01:30:16.999213 kernel: ima: No architecture policies found
Sep 13 01:30:16.999220 kernel: clk: Disabling unused clocks
Sep 13 01:30:16.999226 kernel: Freeing unused kernel memory: 36416K
Sep 13 01:30:16.999234 kernel: Run /init as init process
Sep 13 01:30:16.999241 kernel: with arguments:
Sep 13 01:30:16.999247 kernel: /init
Sep 13 01:30:16.999253 kernel: with environment:
Sep 13 01:30:16.999260 kernel: HOME=/
Sep 13 01:30:16.999266 kernel: TERM=linux
Sep 13 01:30:16.999273 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 01:30:16.999281 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:30:16.999292 systemd[1]: Detected virtualization microsoft.
Sep 13 01:30:16.999299 systemd[1]: Detected architecture arm64.
Sep 13 01:30:16.999306 systemd[1]: Running in initrd.
Sep 13 01:30:16.999313 systemd[1]: No hostname configured, using default hostname.
Sep 13 01:30:16.999320 systemd[1]: Hostname set to .
Sep 13 01:30:16.999327 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:30:16.999334 systemd[1]: Queued start job for default target initrd.target.
Sep 13 01:30:16.999341 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:30:16.999349 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:30:16.999356 systemd[1]: Reached target paths.target.
Sep 13 01:30:16.999363 systemd[1]: Reached target slices.target.
Sep 13 01:30:16.999370 systemd[1]: Reached target swap.target.
Sep 13 01:30:16.999377 systemd[1]: Reached target timers.target.
Sep 13 01:30:16.999385 systemd[1]: Listening on iscsid.socket.
Sep 13 01:30:16.999392 systemd[1]: Listening on iscsiuio.socket.
Sep 13 01:30:16.999399 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 01:30:16.999408 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 01:30:16.999415 systemd[1]: Listening on systemd-journald.socket.
Sep 13 01:30:16.999422 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:30:16.999429 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:30:16.999436 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:30:16.999443 systemd[1]: Reached target sockets.target.
Sep 13 01:30:16.999450 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:30:16.999457 systemd[1]: Finished network-cleanup.service.
Sep 13 01:30:16.999464 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 01:30:16.999473 systemd[1]: Starting systemd-journald.service...
Sep 13 01:30:16.999480 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:30:16.999487 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:30:16.999498 systemd-journald[276]: Journal started
Sep 13 01:30:16.999536 systemd-journald[276]: Runtime Journal (/run/log/journal/db9eff30a27d43cc851f6814f77739af) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:30:16.993474 systemd-modules-load[277]: Inserted module 'overlay'
Sep 13 01:30:17.029593 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 01:30:17.029639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 01:30:17.036962 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 13 01:30:17.046340 kernel: Bridge firewalling registered
Sep 13 01:30:17.046360 systemd[1]: Started systemd-journald.service.
Sep 13 01:30:17.047687 systemd-resolved[278]: Positive Trust Anchors:
Sep 13 01:30:17.047702 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:30:17.089530 kernel: SCSI subsystem initialized
Sep 13 01:30:17.089557 kernel: audit: type=1130 audit(1757727017.069:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.047729 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:30:17.148336 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 01:30:17.148357 kernel: device-mapper: uevent: version 1.0.3
Sep 13 01:30:17.148366 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 01:30:17.049840 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 13 01:30:17.172374 kernel: audit: type=1130 audit(1757727017.150:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.069417 systemd[1]: Started systemd-resolved.service.
Sep 13 01:30:17.195714 kernel: audit: type=1130 audit(1757727017.176:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.150982 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 13 01:30:17.222862 kernel: audit: type=1130 audit(1757727017.201:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.151825 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:30:17.176611 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 01:30:17.256927 kernel: audit: type=1130 audit(1757727017.227:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.202184 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:30:17.281931 kernel: audit: type=1130 audit(1757727017.254:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.228031 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 01:30:17.254848 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:30:17.281823 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 01:30:17.286988 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:30:17.302512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:30:17.329127 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 01:30:17.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.359828 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:30:17.386658 kernel: audit: type=1130 audit(1757727017.333:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.386682 kernel: audit: type=1130 audit(1757727017.364:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.365149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:30:17.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.391604 systemd[1]: Starting dracut-cmdline.service...
Sep 13 01:30:17.424671 kernel: audit: type=1130 audit(1757727017.390:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.424693 dracut-cmdline[299]: dracut-dracut-053
Sep 13 01:30:17.424693 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:30:17.473624 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 01:30:17.495594 kernel: iscsi: registered transport (tcp)
Sep 13 01:30:17.510967 kernel: iscsi: registered transport (qla4xxx)
Sep 13 01:30:17.511046 kernel: QLogic iSCSI HBA Driver
Sep 13 01:30:17.540749 systemd[1]: Finished dracut-cmdline.service.
Sep 13 01:30:17.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:17.546255 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 01:30:17.598596 kernel: raid6: neonx8 gen() 13740 MB/s
Sep 13 01:30:17.620586 kernel: raid6: neonx8 xor() 10812 MB/s
Sep 13 01:30:17.640584 kernel: raid6: neonx4 gen() 13520 MB/s
Sep 13 01:30:17.660584 kernel: raid6: neonx4 xor() 11245 MB/s
Sep 13 01:30:17.682584 kernel: raid6: neonx2 gen() 12944 MB/s
Sep 13 01:30:17.703583 kernel: raid6: neonx2 xor() 10251 MB/s
Sep 13 01:30:17.723584 kernel: raid6: neonx1 gen() 10659 MB/s
Sep 13 01:30:17.744584 kernel: raid6: neonx1 xor() 8790 MB/s
Sep 13 01:30:17.765584 kernel: raid6: int64x8 gen() 6272 MB/s
Sep 13 01:30:17.785584 kernel: raid6: int64x8 xor() 3544 MB/s
Sep 13 01:30:17.806589 kernel: raid6: int64x4 gen() 7207 MB/s
Sep 13 01:30:17.826584 kernel: raid6: int64x4 xor() 3856 MB/s
Sep 13 01:30:17.846583 kernel: raid6: int64x2 gen() 6156 MB/s
Sep 13 01:30:17.867584 kernel: raid6: int64x2 xor() 3320 MB/s
Sep 13 01:30:17.887584 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 13 01:30:17.912541 kernel: raid6: int64x1 xor() 2648 MB/s
Sep 13 01:30:17.912551 kernel: raid6: using algorithm neonx8 gen() 13740 MB/s
Sep 13 01:30:17.912559 kernel: raid6: .... xor() 10812 MB/s, rmw enabled
Sep 13 01:30:17.917012 kernel: raid6: using neon recovery algorithm
Sep 13 01:30:17.936588 kernel: xor: measuring software checksum speed
Sep 13 01:30:17.944438 kernel: 8regs : 16293 MB/sec
Sep 13 01:30:17.944448 kernel: 32regs : 20691 MB/sec
Sep 13 01:30:17.948443 kernel: arm64_neon : 27775 MB/sec
Sep 13 01:30:17.953934 kernel: xor: using function: arm64_neon (27775 MB/sec)
Sep 13 01:30:18.010596 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 01:30:18.020073 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 01:30:18.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:18.029000 audit: BPF prog-id=7 op=LOAD
Sep 13 01:30:18.029000 audit: BPF prog-id=8 op=LOAD
Sep 13 01:30:18.030110 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:30:18.048373 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Sep 13 01:30:18.054827 systemd[1]: Started systemd-udevd.service.
Sep 13 01:30:18.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:18.065547 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 01:30:18.075973 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Sep 13 01:30:18.105987 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 01:30:18.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:18.112515 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:30:18.145614 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:30:18.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:18.202601 kernel: hv_vmbus: Vmbus version:5.3
Sep 13 01:30:18.212605 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 01:30:18.228536 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 13 01:30:18.228599 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 01:30:18.234590 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 01:30:18.247242 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 01:30:18.247296 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 13 01:30:18.247306 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 01:30:18.264528 kernel: scsi host1: storvsc_host_t
Sep 13 01:30:18.264694 kernel: scsi host0: storvsc_host_t
Sep 13 01:30:18.271849 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 01:30:18.279597 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 01:30:18.297275 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 01:30:18.304788 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 01:30:18.304801 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 01:30:18.333571 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 01:30:18.333696 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 01:30:18.333799 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 01:30:18.333881 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 01:30:18.333970 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 01:30:18.334063 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:30:18.334072 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 01:30:18.382334 kernel: hv_netvsc 0022487a-600d-0022-487a-600d0022487a eth0: VF slot 1 added
Sep 13 01:30:18.391433 kernel: hv_vmbus: registering driver hv_pci
Sep 13 01:30:18.400131 kernel: hv_pci b32e1882-1214-442f-9acd-977dcc96708e: PCI VMBus probing: Using version 0x10004
Sep 13 01:30:18.682902 kernel: hv_pci b32e1882-1214-442f-9acd-977dcc96708e: PCI host bridge to bus 1214:00
Sep 13 01:30:18.683015 kernel: pci_bus 1214:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 13 01:30:18.683111 kernel: pci_bus 1214:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 01:30:18.683181 kernel: pci 1214:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 13 01:30:18.683270 kernel: pci 1214:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 01:30:18.683346 kernel: pci 1214:00:02.0: enabling Extended Tags
Sep 13 01:30:18.683421 kernel: pci 1214:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1214:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 13 01:30:18.683498 kernel: pci_bus 1214:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 01:30:18.683567 kernel: pci 1214:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 01:30:18.720617 kernel: mlx5_core 1214:00:02.0: enabling device (0000 -> 0002)
Sep 13 01:30:19.027038 kernel: mlx5_core 1214:00:02.0: firmware version: 16.31.2424
Sep 13 01:30:19.027190 kernel: mlx5_core 1214:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Sep 13 01:30:19.027275 kernel: hv_netvsc 0022487a-600d-0022-487a-600d0022487a eth0: VF registering: eth1
Sep 13 01:30:19.027360 kernel: mlx5_core 1214:00:02.0 eth1: joined to eth0
Sep 13 01:30:19.034592 kernel: mlx5_core 1214:00:02.0 enP4628s1: renamed from eth1
Sep 13 01:30:19.055180 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 01:30:19.062234 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (539)
Sep 13 01:30:19.078175 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:30:19.320068 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 01:30:19.326176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 01:30:19.337332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 01:30:19.350260 systemd[1]: Starting disk-uuid.service...
Sep 13 01:30:19.378602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:30:20.394451 disk-uuid[605]: The operation has completed successfully.
Sep 13 01:30:20.399756 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:30:20.460835 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 01:30:20.465733 systemd[1]: Finished disk-uuid.service.
Sep 13 01:30:20.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:20.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:20.475207 systemd[1]: Starting verity-setup.service...
Sep 13 01:30:20.515602 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 01:30:20.936356 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 01:30:20.943328 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 01:30:20.955757 systemd[1]: Finished verity-setup.service.
Sep 13 01:30:20.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.017599 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 01:30:21.018322 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 01:30:21.022442 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 01:30:21.023215 systemd[1]: Starting ignition-setup.service...
Sep 13 01:30:21.043645 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 01:30:21.071005 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:30:21.071068 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:30:21.075726 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:30:21.119172 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 01:30:21.157809 kernel: kauditd_printk_skb: 10 callbacks suppressed
Sep 13 01:30:21.157832 kernel: audit: type=1130 audit(1757727021.123:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.157842 kernel: audit: type=1334 audit(1757727021.128:22): prog-id=9 op=LOAD
Sep 13 01:30:21.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.128000 audit: BPF prog-id=9 op=LOAD
Sep 13 01:30:21.129336 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:30:21.180161 systemd-networkd[869]: lo: Link UP
Sep 13 01:30:21.180169 systemd-networkd[869]: lo: Gained carrier
Sep 13 01:30:21.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.180587 systemd-networkd[869]: Enumeration completed
Sep 13 01:30:21.222617 kernel: audit: type=1130 audit(1757727021.189:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.180924 systemd[1]: Started systemd-networkd.service.
Sep 13 01:30:21.189632 systemd[1]: Reached target network.target.
Sep 13 01:30:21.213442 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:30:21.267359 kernel: audit: type=1130 audit(1757727021.238:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.218936 systemd[1]: Starting iscsiuio.service...
Sep 13 01:30:21.301690 kernel: audit: type=1130 audit(1757727021.271:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.301760 iscsid[876]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 01:30:21.301760 iscsid[876]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 13 01:30:21.301760 iscsid[876]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 01:30:21.301760 iscsid[876]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 01:30:21.301760 iscsid[876]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 01:30:21.301760 iscsid[876]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 01:30:21.301760 iscsid[876]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 01:30:21.405696 kernel: audit: type=1130 audit(1757727021.332:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.230452 systemd[1]: Started iscsiuio.service.
Sep 13 01:30:21.435638 kernel: mlx5_core 1214:00:02.0 enP4628s1: Link up
Sep 13 01:30:21.435815 kernel: audit: type=1130 audit(1757727021.409:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.435827 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:30:21.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.239467 systemd[1]: Starting iscsid.service...
Sep 13 01:30:21.263618 systemd[1]: Started iscsid.service.
Sep 13 01:30:21.286110 systemd[1]: Starting dracut-initqueue.service...
Sep 13 01:30:21.312426 systemd[1]: Finished dracut-initqueue.service.
Sep 13 01:30:21.332738 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 01:30:21.365363 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 01:30:21.370657 systemd[1]: Reached target remote-fs.target.
Sep 13 01:30:21.385238 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 01:30:21.401092 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 01:30:21.401514 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 01:30:21.532593 kernel: hv_netvsc 0022487a-600d-0022-487a-600d0022487a eth0: Data path switched to VF: enP4628s1
Sep 13 01:30:21.532777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 01:30:21.538464 systemd-networkd[869]: enP4628s1: Link UP
Sep 13 01:30:21.538552 systemd-networkd[869]: eth0: Link UP
Sep 13 01:30:21.538701 systemd-networkd[869]: eth0: Gained carrier
Sep 13 01:30:21.549742 systemd-networkd[869]: enP4628s1: Gained carrier
Sep 13 01:30:21.558642 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.47/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:30:21.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.570169 systemd[1]: Finished ignition-setup.service.
Sep 13 01:30:21.596006 kernel: audit: type=1130 audit(1757727021.574:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:21.596277 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 01:30:23.121694 systemd-networkd[869]: eth0: Gained IPv6LL
Sep 13 01:30:25.553222 ignition[896]: Ignition 2.14.0
Sep 13 01:30:25.553234 ignition[896]: Stage: fetch-offline
Sep 13 01:30:25.553289 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:30:25.553315 ignition[896]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:30:25.663779 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:30:25.663921 ignition[896]: parsed url from cmdline: ""
Sep 13 01:30:25.663924 ignition[896]: no config URL provided
Sep 13 01:30:25.663930 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:30:25.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:25.673793 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 01:30:25.707670 kernel: audit: type=1130 audit(1757727025.679:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:25.663937 ignition[896]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:30:25.701974 systemd[1]: Starting ignition-fetch.service...
Sep 13 01:30:25.663942 ignition[896]: failed to fetch config: resource requires networking
Sep 13 01:30:25.664167 ignition[896]: Ignition finished successfully
Sep 13 01:30:25.713194 ignition[902]: Ignition 2.14.0
Sep 13 01:30:25.713200 ignition[902]: Stage: fetch
Sep 13 01:30:25.713308 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:30:25.713325 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:30:25.715960 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:30:25.716408 ignition[902]: parsed url from cmdline: ""
Sep 13 01:30:25.716413 ignition[902]: no config URL provided
Sep 13 01:30:25.716420 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:30:25.716436 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:30:25.716471 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 01:30:25.837702 ignition[902]: GET result: OK
Sep 13 01:30:25.837776 ignition[902]: config has been read from IMDS userdata
Sep 13 01:30:25.840666 unknown[902]: fetched base config from "system"
Sep 13 01:30:25.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:25.837815 ignition[902]: parsing config with SHA512: 273b3475f2901a2315d3e73ffe43aba228dda0be249beb9e2390cc941ad3231b270b8a5e5bb599a62223f790b0f9598ec30d2d3b557b4d42c24c55de3cf6244b
Sep 13 01:30:25.840673 unknown[902]: fetched base config from "system"
Sep 13 01:30:25.877631 kernel: audit: type=1130 audit(1757727025.849:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:25.841189 ignition[902]: fetch: fetch complete
Sep 13 01:30:25.840679 unknown[902]: fetched user config from "azure"
Sep 13 01:30:25.841193 ignition[902]: fetch: fetch passed
Sep 13 01:30:25.845831 systemd[1]: Finished ignition-fetch.service.
Sep 13 01:30:25.841233 ignition[902]: Ignition finished successfully
Sep 13 01:30:25.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:25.851165 systemd[1]: Starting ignition-kargs.service...
Sep 13 01:30:25.880639 ignition[908]: Ignition 2.14.0
Sep 13 01:30:25.889026 systemd[1]: Finished ignition-kargs.service.
Sep 13 01:30:25.880645 ignition[908]: Stage: kargs
Sep 13 01:30:25.894530 systemd[1]: Starting ignition-disks.service...
Sep 13 01:30:25.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:25.880787 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:30:25.917512 systemd[1]: Finished ignition-disks.service.
Sep 13 01:30:25.880805 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:30:25.924975 systemd[1]: Reached target initrd-root-device.target.
Sep 13 01:30:25.885160 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:30:25.933602 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:30:25.886870 ignition[908]: kargs: kargs passed
Sep 13 01:30:25.946364 systemd[1]: Reached target local-fs.target.
Sep 13 01:30:25.886927 ignition[908]: Ignition finished successfully
Sep 13 01:30:25.955143 systemd[1]: Reached target sysinit.target.
Sep 13 01:30:25.908273 ignition[914]: Ignition 2.14.0
Sep 13 01:30:25.963981 systemd[1]: Reached target basic.target.
Sep 13 01:30:25.908279 ignition[914]: Stage: disks
Sep 13 01:30:25.974942 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 01:30:25.908388 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:30:25.908406 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:30:25.911154 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:30:25.913523 ignition[914]: disks: disks passed
Sep 13 01:30:25.913622 ignition[914]: Ignition finished successfully
Sep 13 01:30:26.058361 systemd-fsck[922]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks
Sep 13 01:30:26.065678 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 01:30:26.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:26.070918 systemd[1]: Mounting sysroot.mount...
Sep 13 01:30:26.115594 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 01:30:26.115818 systemd[1]: Mounted sysroot.mount.
Sep 13 01:30:26.119623 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 01:30:26.158939 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 01:30:26.163626 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 13 01:30:26.171120 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 01:30:26.171152 systemd[1]: Reached target ignition-diskful.target.
Sep 13 01:30:26.177008 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 01:30:26.257968 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 01:30:26.263288 systemd[1]: Starting initrd-setup-root.service...
Sep 13 01:30:26.287602 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Sep 13 01:30:26.299096 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:30:26.299127 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:30:26.304059 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:30:26.304170 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 01:30:26.318700 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 01:30:26.348988 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory
Sep 13 01:30:26.374592 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 01:30:26.439054 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 01:30:27.228426 systemd[1]: Finished initrd-setup-root.service.
Sep 13 01:30:27.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:27.234066 systemd[1]: Starting ignition-mount.service...
Sep 13 01:30:27.273937 kernel: kauditd_printk_skb: 3 callbacks suppressed
Sep 13 01:30:27.273957 kernel: audit: type=1130 audit(1757727027.232:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:27.264647 systemd[1]: Starting sysroot-boot.service...
Sep 13 01:30:27.273818 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 01:30:27.273925 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 01:30:27.305992 systemd[1]: Finished sysroot-boot.service.
Sep 13 01:30:27.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:27.331602 kernel: audit: type=1130 audit(1757727027.310:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:27.537724 ignition[1003]: INFO : Ignition 2.14.0
Sep 13 01:30:27.537724 ignition[1003]: INFO : Stage: mount
Sep 13 01:30:27.551662 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:30:27.551662 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:30:27.551662 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:30:27.551662 ignition[1003]: INFO : mount: mount passed
Sep 13 01:30:27.551662 ignition[1003]: INFO : Ignition finished successfully
Sep 13 01:30:27.607325 kernel: audit: type=1130 audit(1757727027.551:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:27.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:27.547174 systemd[1]: Finished ignition-mount.service.
Sep 13 01:30:27.957379 coreos-metadata[932]: Sep 13 01:30:27.957 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 01:30:27.968508 coreos-metadata[932]: Sep 13 01:30:27.968 INFO Fetch successful
Sep 13 01:30:28.002787 coreos-metadata[932]: Sep 13 01:30:28.002 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 01:30:28.015660 coreos-metadata[932]: Sep 13 01:30:28.015 INFO Fetch successful
Sep 13 01:30:28.057633 coreos-metadata[932]: Sep 13 01:30:28.057 INFO wrote hostname ci-3510.3.8-n-a3199d6d1b to /sysroot/etc/hostname
Sep 13 01:30:28.067296 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 13 01:30:28.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:28.094241 systemd[1]: Starting ignition-files.service...
Sep 13 01:30:28.104356 kernel: audit: type=1130 audit(1757727028.072:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:28.105190 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 01:30:28.128605 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1011)
Sep 13 01:30:28.144097 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:30:28.144114 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:30:28.144123 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:30:28.157800 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 01:30:28.174903 ignition[1030]: INFO : Ignition 2.14.0
Sep 13 01:30:28.174903 ignition[1030]: INFO : Stage: files
Sep 13 01:30:28.184814 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:30:28.184814 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:30:28.184814 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:30:28.184814 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 01:30:28.217559 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 01:30:28.217559 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 01:30:28.327111 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 01:30:28.335322 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 01:30:28.343953 unknown[1030]: wrote ssh authorized keys file for user: core
Sep 13 01:30:28.349323 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 01:30:28.360989 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 01:30:28.372166 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 13 01:30:28.433482 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 01:30:28.568463 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 01:30:28.585415 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 01:30:28.595465 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 01:30:28.885608 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 01:30:29.084770 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:30:29.094803 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647775749"
Sep 13 01:30:29.173559 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647775749": device or resource busy
Sep 13 01:30:29.173559 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1647775749", trying btrfs: device or resource busy
Sep 13 01:30:29.173559 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647775749"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1647775749"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1647775749"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1647775749"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem669613442"
Sep 13 01:30:29.296945 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem669613442": device or resource busy
Sep 13 01:30:29.296945 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem669613442", trying btrfs: device or resource busy
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem669613442"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem669613442"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem669613442"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem669613442"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 01:30:29.296945 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 01:30:29.184605 systemd[1]: mnt-oem1647775749.mount: Deactivated successfully.
Sep 13 01:30:29.464995 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 13 01:30:29.233993 systemd[1]: mnt-oem669613442.mount: Deactivated successfully.
Sep 13 01:30:29.704715 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Sep 13 01:30:29.951434 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(14): [started] processing unit "waagent.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(14): [finished] processing unit "waagent.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(15): [started] processing unit "nvidia.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:30:29.969013 ignition[1030]: INFO : files: files passed
Sep 13 01:30:29.969013 ignition[1030]: INFO : Ignition finished successfully
Sep 13 01:30:30.261704 kernel: audit: type=1130 audit(1757727029.992:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:30.261734 kernel: audit: type=1130 audit(1757727030.054:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:30.261746 kernel: audit: type=1131 audit(1757727030.054:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:30.261757 kernel: audit: type=1130 audit(1757727030.101:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 01:30:30.261767 kernel: audit: type=1130 audit(1757727030.171:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.261776 kernel: audit: type=1131 audit(1757727030.171:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:29.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:29.977645 systemd[1]: Finished ignition-files.service. Sep 13 01:30:29.996533 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Sep 13 01:30:30.020461 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 01:30:30.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.290376 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:30:30.021432 systemd[1]: Starting ignition-quench.service... Sep 13 01:30:30.041331 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 01:30:30.041903 systemd[1]: Finished ignition-quench.service. Sep 13 01:30:30.055313 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 01:30:30.101804 systemd[1]: Reached target ignition-complete.target. Sep 13 01:30:30.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.137440 systemd[1]: Starting initrd-parse-etc.service... Sep 13 01:30:30.167488 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 01:30:30.167596 systemd[1]: Finished initrd-parse-etc.service. Sep 13 01:30:30.172799 systemd[1]: Reached target initrd-fs.target. Sep 13 01:30:30.215123 systemd[1]: Reached target initrd.target. Sep 13 01:30:30.227319 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 01:30:30.236587 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 01:30:30.266725 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 01:30:30.276919 systemd[1]: Starting initrd-cleanup.service... Sep 13 01:30:30.301358 systemd[1]: Stopped target nss-lookup.target. Sep 13 01:30:30.307697 systemd[1]: Stopped target remote-cryptsetup.target. 
Sep 13 01:30:30.317432 systemd[1]: Stopped target timers.target. Sep 13 01:30:30.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.325385 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 01:30:30.325491 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 01:30:30.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.335039 systemd[1]: Stopped target initrd.target. Sep 13 01:30:30.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.343719 systemd[1]: Stopped target basic.target. Sep 13 01:30:30.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.351681 systemd[1]: Stopped target ignition-complete.target. Sep 13 01:30:30.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.361548 systemd[1]: Stopped target ignition-diskful.target. Sep 13 01:30:30.370240 systemd[1]: Stopped target initrd-root-device.target. Sep 13 01:30:30.378977 systemd[1]: Stopped target remote-fs.target. 
Sep 13 01:30:30.508938 ignition[1068]: INFO : Ignition 2.14.0 Sep 13 01:30:30.508938 ignition[1068]: INFO : Stage: umount Sep 13 01:30:30.508938 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:30:30.508938 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:30:30.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.387033 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 01:30:30.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.568895 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:30:30.568895 ignition[1068]: INFO : umount: umount passed Sep 13 01:30:30.568895 ignition[1068]: INFO : Ignition finished successfully Sep 13 01:30:30.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.398615 systemd[1]: Stopped target sysinit.target. 
Sep 13 01:30:30.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.406848 systemd[1]: Stopped target local-fs.target. Sep 13 01:30:30.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.414847 systemd[1]: Stopped target local-fs-pre.target. Sep 13 01:30:30.423304 systemd[1]: Stopped target swap.target. Sep 13 01:30:30.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.431001 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 01:30:30.431116 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 01:30:30.439997 systemd[1]: Stopped target cryptsetup.target. Sep 13 01:30:30.447999 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 01:30:30.448098 systemd[1]: Stopped dracut-initqueue.service. Sep 13 01:30:30.457423 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 01:30:30.457511 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 01:30:30.466549 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 01:30:30.466648 systemd[1]: Stopped ignition-files.service. Sep 13 01:30:30.474596 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 01:30:30.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.474680 systemd[1]: Stopped flatcar-metadata-hostname.service. 
Sep 13 01:30:30.483999 systemd[1]: Stopping ignition-mount.service... Sep 13 01:30:30.499332 systemd[1]: Stopping iscsiuio.service... Sep 13 01:30:30.516106 systemd[1]: Stopping sysroot-boot.service... Sep 13 01:30:30.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.527782 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 01:30:30.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.528012 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 01:30:30.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.756000 audit: BPF prog-id=6 op=UNLOAD Sep 13 01:30:30.542216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 01:30:30.542357 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 01:30:30.549271 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 01:30:30.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.549396 systemd[1]: Stopped iscsiuio.service. Sep 13 01:30:30.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 01:30:30.569340 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 01:30:30.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.569885 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 01:30:30.569978 systemd[1]: Stopped ignition-mount.service. Sep 13 01:30:30.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.580409 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 01:30:30.580512 systemd[1]: Stopped ignition-disks.service. Sep 13 01:30:30.589260 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 01:30:30.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.589350 systemd[1]: Stopped ignition-kargs.service. Sep 13 01:30:30.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.598694 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 01:30:30.598777 systemd[1]: Stopped ignition-fetch.service. Sep 13 01:30:30.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.606746 systemd[1]: Stopped target network.target. Sep 13 01:30:30.615186 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 13 01:30:30.615291 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 01:30:30.624176 systemd[1]: Stopped target paths.target. Sep 13 01:30:30.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.631549 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 01:30:30.641607 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 01:30:30.929682 kernel: hv_netvsc 0022487a-600d-0022-487a-600d0022487a eth0: Data path switched from VF: enP4628s1 Sep 13 01:30:30.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.649821 systemd[1]: Stopped target slices.target. Sep 13 01:30:30.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.659207 systemd[1]: Stopped target sockets.target. Sep 13 01:30:30.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.671196 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 01:30:30.671271 systemd[1]: Closed iscsid.socket. Sep 13 01:30:30.686053 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 01:30:30.686128 systemd[1]: Closed iscsiuio.socket. 
Sep 13 01:30:30.694516 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 01:30:30.694621 systemd[1]: Stopped ignition-setup.service. Sep 13 01:30:30.702990 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:30:30.713736 systemd[1]: Stopping systemd-resolved.service... Sep 13 01:30:30.722641 systemd-networkd[869]: eth0: DHCPv6 lease lost Sep 13 01:30:30.983000 audit: BPF prog-id=9 op=UNLOAD Sep 13 01:30:30.726648 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:30:30.726748 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:30:30.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.737421 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 01:30:30.737525 systemd[1]: Stopped systemd-resolved.service. Sep 13 01:30:30.745905 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 01:30:30.745993 systemd[1]: Finished initrd-cleanup.service. Sep 13 01:30:30.757690 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 01:30:30.757730 systemd[1]: Closed systemd-networkd.socket. Sep 13 01:30:30.766040 systemd[1]: Stopping network-cleanup.service... Sep 13 01:30:30.779201 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 01:30:30.779265 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 01:30:30.784508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:30:30.784561 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:30:30.796951 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 01:30:30.797000 systemd[1]: Stopped systemd-modules-load.service. Sep 13 01:30:30.803230 systemd[1]: Stopping systemd-udevd.service... 
Sep 13 01:30:31.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.813420 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 01:30:30.818264 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 01:30:30.818404 systemd[1]: Stopped systemd-udevd.service. Sep 13 01:30:31.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:30:30.823046 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 01:30:30.823093 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 01:30:30.831429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 01:30:30.831473 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 01:30:30.841053 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 01:30:30.841101 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 01:30:30.849869 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 01:30:31.139978 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Sep 13 01:30:31.140011 iscsid[876]: iscsid shutting down. Sep 13 01:30:30.849911 systemd[1]: Stopped dracut-cmdline.service. Sep 13 01:30:30.859115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:30:30.859157 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 01:30:30.877127 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 01:30:30.891855 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 01:30:30.891955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 01:30:30.914507 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 13 01:30:30.914606 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 01:30:30.925159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:30:30.925222 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 01:30:30.935414 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 01:30:30.935946 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 01:30:30.936034 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 01:30:30.988789 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 01:30:30.988901 systemd[1]: Stopped network-cleanup.service. Sep 13 01:30:31.058606 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 01:30:31.058718 systemd[1]: Stopped sysroot-boot.service. Sep 13 01:30:31.066938 systemd[1]: Reached target initrd-switch-root.target. Sep 13 01:30:31.082406 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 01:30:31.082475 systemd[1]: Stopped initrd-setup-root.service. Sep 13 01:30:31.092638 systemd[1]: Starting initrd-switch-root.service... Sep 13 01:30:31.109084 systemd[1]: Switching root. Sep 13 01:30:31.140631 systemd-journald[276]: Journal stopped Sep 13 01:30:49.211061 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 01:30:49.211080 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 01:30:49.211090 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 01:30:49.211100 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 01:30:49.211108 kernel: SELinux: policy capability open_perms=1 Sep 13 01:30:49.211116 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 01:30:49.211124 kernel: SELinux: policy capability always_check_network=0 Sep 13 01:30:49.211134 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 01:30:49.211142 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 01:30:49.211149 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 01:30:49.211157 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 01:30:49.211166 kernel: kauditd_printk_skb: 37 callbacks suppressed Sep 13 01:30:49.211175 kernel: audit: type=1403 audit(1757727034.281:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:30:49.211184 systemd[1]: Successfully loaded SELinux policy in 443.027ms. Sep 13 01:30:49.211195 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.217ms. Sep 13 01:30:49.211206 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:30:49.211216 systemd[1]: Detected virtualization microsoft. Sep 13 01:30:49.211225 systemd[1]: Detected architecture arm64. Sep 13 01:30:49.211233 systemd[1]: Detected first boot. Sep 13 01:30:49.211243 systemd[1]: Hostname set to . Sep 13 01:30:49.211251 systemd[1]: Initializing machine ID from random generator. 
Sep 13 01:30:49.211260 kernel: audit: type=1400 audit(1757727035.413:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:30:49.211271 kernel: audit: type=1400 audit(1757727035.413:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:30:49.211280 kernel: audit: type=1334 audit(1757727035.431:84): prog-id=10 op=LOAD Sep 13 01:30:49.211288 kernel: audit: type=1334 audit(1757727035.431:85): prog-id=10 op=UNLOAD Sep 13 01:30:49.211296 kernel: audit: type=1334 audit(1757727035.448:86): prog-id=11 op=LOAD Sep 13 01:30:49.211305 kernel: audit: type=1334 audit(1757727035.448:87): prog-id=11 op=UNLOAD Sep 13 01:30:49.211313 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 01:30:49.211323 kernel: audit: type=1400 audit(1757727037.181:88): avc: denied { associate } for pid=1102 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 01:30:49.211334 kernel: audit: type=1300 audit(1757727037.181:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001453a4 a1=40000c6708 a2=40000ccc00 a3=32 items=0 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:30:49.211343 kernel: audit: type=1327 audit(1757727037.181:88): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:30:49.211352 systemd[1]: Populated /etc with preset unit settings. Sep 13 01:30:49.211361 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:30:49.211371 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:30:49.211381 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:30:49.211390 kernel: kauditd_printk_skb: 6 callbacks suppressed Sep 13 01:30:49.211398 kernel: audit: type=1334 audit(1757727048.438:90): prog-id=12 op=LOAD Sep 13 01:30:49.211407 kernel: audit: type=1334 audit(1757727048.438:91): prog-id=3 op=UNLOAD Sep 13 01:30:49.211415 kernel: audit: type=1334 audit(1757727048.444:92): prog-id=13 op=LOAD Sep 13 01:30:49.211424 kernel: audit: type=1334 audit(1757727048.450:93): prog-id=14 op=LOAD Sep 13 01:30:49.211434 kernel: audit: type=1334 audit(1757727048.450:94): prog-id=4 op=UNLOAD Sep 13 01:30:49.211444 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 01:30:49.211452 kernel: audit: type=1334 audit(1757727048.450:95): prog-id=5 op=UNLOAD Sep 13 01:30:49.211461 systemd[1]: Stopped iscsid.service. 
Sep 13 01:30:49.211471 kernel: audit: type=1334 audit(1757727048.455:96): prog-id=15 op=LOAD Sep 13 01:30:49.211480 kernel: audit: type=1334 audit(1757727048.455:97): prog-id=12 op=UNLOAD Sep 13 01:30:49.211488 kernel: audit: type=1334 audit(1757727048.461:98): prog-id=16 op=LOAD Sep 13 01:30:49.211497 kernel: audit: type=1334 audit(1757727048.466:99): prog-id=17 op=LOAD Sep 13 01:30:49.211505 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 01:30:49.211515 systemd[1]: Stopped initrd-switch-root.service. Sep 13 01:30:49.211525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 01:30:49.211535 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 01:30:49.211545 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 01:30:49.211554 systemd[1]: Created slice system-getty.slice. Sep 13 01:30:49.211563 systemd[1]: Created slice system-modprobe.slice. Sep 13 01:30:49.211572 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 01:30:49.211594 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 01:30:49.211604 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 01:30:49.211613 systemd[1]: Created slice user.slice. Sep 13 01:30:49.211622 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:30:49.211633 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 01:30:49.211642 systemd[1]: Set up automount boot.automount. Sep 13 01:30:49.211651 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 01:30:49.211660 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 01:30:49.211669 systemd[1]: Stopped target initrd-fs.target. Sep 13 01:30:49.211678 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 01:30:49.211687 systemd[1]: Reached target integritysetup.target. Sep 13 01:30:49.211696 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:30:49.211706 systemd[1]: Reached target remote-fs.target. 
Sep 13 01:30:49.211715 systemd[1]: Reached target slices.target. Sep 13 01:30:49.211725 systemd[1]: Reached target swap.target. Sep 13 01:30:49.211735 systemd[1]: Reached target torcx.target. Sep 13 01:30:49.211744 systemd[1]: Reached target veritysetup.target. Sep 13 01:30:49.211753 systemd[1]: Listening on systemd-coredump.socket. Sep 13 01:30:49.211764 systemd[1]: Listening on systemd-initctl.socket. Sep 13 01:30:49.211773 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:30:49.211782 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 01:30:49.211791 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 01:30:49.211801 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 01:30:49.211810 systemd[1]: Mounting dev-hugepages.mount... Sep 13 01:30:49.211820 systemd[1]: Mounting dev-mqueue.mount... Sep 13 01:30:49.211829 systemd[1]: Mounting media.mount... Sep 13 01:30:49.211839 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 01:30:49.211848 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 01:30:49.211858 systemd[1]: Mounting tmp.mount... Sep 13 01:30:49.211867 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 01:30:49.211876 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:30:49.211885 systemd[1]: Starting kmod-static-nodes.service... Sep 13 01:30:49.211894 systemd[1]: Starting modprobe@configfs.service... Sep 13 01:30:49.211903 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:30:49.211912 systemd[1]: Starting modprobe@drm.service... Sep 13 01:30:49.211923 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:30:49.211933 systemd[1]: Starting modprobe@fuse.service... Sep 13 01:30:49.211942 systemd[1]: Starting modprobe@loop.service... Sep 13 01:30:49.211951 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 13 01:30:49.211961 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 01:30:49.211970 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 01:30:49.211979 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 01:30:49.211989 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 01:30:49.211999 systemd[1]: Stopped systemd-journald.service.
Sep 13 01:30:49.212009 systemd[1]: systemd-journald.service: Consumed 3.065s CPU time.
Sep 13 01:30:49.212018 systemd[1]: Starting systemd-journald.service...
Sep 13 01:30:49.212026 kernel: loop: module loaded
Sep 13 01:30:49.212035 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:30:49.212044 kernel: fuse: init (API version 7.34)
Sep 13 01:30:49.212053 systemd[1]: Starting systemd-network-generator.service...
Sep 13 01:30:49.212062 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 01:30:49.212071 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:30:49.212082 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 01:30:49.212091 systemd[1]: Stopped verity-setup.service.
Sep 13 01:30:49.212100 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 01:30:49.212109 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 01:30:49.212119 systemd[1]: Mounted media.mount.
Sep 13 01:30:49.212128 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 01:30:49.212138 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 01:30:49.212147 systemd[1]: Mounted tmp.mount.
Sep 13 01:30:49.212159 systemd-journald[1208]: Journal started
Sep 13 01:30:49.212196 systemd-journald[1208]: Runtime Journal (/run/log/journal/db56f85be5ea4f8bb4de3a382228462c) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:30:34.281000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 01:30:35.413000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:30:35.413000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:30:35.431000 audit: BPF prog-id=10 op=LOAD
Sep 13 01:30:35.431000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 01:30:35.448000 audit: BPF prog-id=11 op=LOAD
Sep 13 01:30:35.448000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 01:30:37.181000 audit[1102]: AVC avc: denied { associate } for pid=1102 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 01:30:37.181000 audit[1102]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001453a4 a1=40000c6708 a2=40000ccc00 a3=32 items=0 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:30:37.181000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:30:37.190000 audit[1102]: AVC avc: denied { associate } for pid=1102 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 01:30:37.190000 audit[1102]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145479 a2=1ed a3=0 items=2 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:30:37.190000 audit: CWD cwd="/"
Sep 13 01:30:37.190000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:37.190000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:37.190000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:30:48.438000 audit: BPF prog-id=12 op=LOAD
Sep 13 01:30:48.438000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 01:30:48.444000 audit: BPF prog-id=13 op=LOAD
Sep 13 01:30:48.450000 audit: BPF prog-id=14 op=LOAD
Sep 13 01:30:48.450000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 01:30:48.450000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 01:30:48.455000 audit: BPF prog-id=15 op=LOAD
Sep 13 01:30:48.455000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 01:30:48.461000 audit: BPF prog-id=16 op=LOAD
Sep 13 01:30:48.466000 audit: BPF prog-id=17 op=LOAD
Sep 13 01:30:48.466000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 01:30:48.466000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 01:30:48.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:48.488000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 01:30:48.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:48.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:48.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.054000 audit: BPF prog-id=18 op=LOAD
Sep 13 01:30:49.054000 audit: BPF prog-id=19 op=LOAD
Sep 13 01:30:49.054000 audit: BPF prog-id=20 op=LOAD
Sep 13 01:30:49.054000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 01:30:49.054000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 01:30:49.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.208000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 01:30:49.208000 audit[1208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffce21e950 a2=4000 a3=1 items=0 ppid=1 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:30:49.208000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 01:30:37.066473 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:30:48.437790 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 01:30:37.106333 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 01:30:48.437802 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Sep 13 01:30:37.106364 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 01:30:48.468056 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 01:30:37.106402 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 01:30:48.468410 systemd[1]: systemd-journald.service: Consumed 3.065s CPU time.
Sep 13 01:30:37.106411 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 01:30:37.106447 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 01:30:37.106459 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 01:30:37.106681 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 01:30:37.106715 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 01:30:37.106727 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 01:30:37.166488 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 01:30:37.166551 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 01:30:37.166608 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 01:30:37.166637 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 01:30:37.166663 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 01:30:37.166676 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 01:30:44.326811 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:30:44.327075 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:30:44.327173 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:30:44.327329 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:30:44.327377 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 01:30:44.327431 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-09-13T01:30:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 01:30:49.224599 systemd[1]: Started systemd-journald.service.
Sep 13 01:30:49.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.225463 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 01:30:49.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.230077 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:30:49.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.235232 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 01:30:49.235355 systemd[1]: Finished modprobe@configfs.service.
Sep 13 01:30:49.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.240401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:30:49.240522 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:30:49.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.245111 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:30:49.245227 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:30:49.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.249696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:30:49.249817 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:30:49.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.254804 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 01:30:49.254922 systemd[1]: Finished modprobe@fuse.service.
Sep 13 01:30:49.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.259536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:30:49.259663 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:30:49.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.264266 systemd[1]: Finished systemd-network-generator.service.
Sep 13 01:30:49.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.269692 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 01:30:49.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.274961 systemd[1]: Reached target network-pre.target.
Sep 13 01:30:49.280703 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 01:30:49.285992 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 01:30:49.289899 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 01:30:49.348324 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 01:30:49.353602 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 01:30:49.357830 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:30:49.358840 systemd[1]: Starting systemd-random-seed.service...
Sep 13 01:30:49.363459 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:30:49.364468 systemd[1]: Starting systemd-sysusers.service...
Sep 13 01:30:49.371421 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:30:49.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.377058 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 01:30:49.382261 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:30:49.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.387029 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 01:30:49.392287 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:30:49.397060 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 01:30:49.404238 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 01:30:49.446682 systemd-journald[1208]: Time spent on flushing to /var/log/journal/db56f85be5ea4f8bb4de3a382228462c is 14.754ms for 1109 entries.
Sep 13 01:30:49.446682 systemd-journald[1208]: System Journal (/var/log/journal/db56f85be5ea4f8bb4de3a382228462c) is 8.0M, max 2.6G, 2.6G free.
Sep 13 01:30:49.546756 systemd-journald[1208]: Received client request to flush runtime journal.
Sep 13 01:30:49.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.462181 systemd[1]: Finished systemd-random-seed.service.
Sep 13 01:30:49.467114 systemd[1]: Reached target first-boot-complete.target.
Sep 13 01:30:49.548047 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 01:30:49.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:49.656899 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:30:49.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:50.366844 systemd[1]: Finished systemd-sysusers.service.
Sep 13 01:30:50.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:50.372927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:30:51.632178 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 01:30:51.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:51.861813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:30:51.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:51.866000 audit: BPF prog-id=21 op=LOAD
Sep 13 01:30:51.867000 audit: BPF prog-id=22 op=LOAD
Sep 13 01:30:51.867000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 01:30:51.867000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 01:30:51.868228 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:30:51.886084 systemd-udevd[1227]: Using default interface naming scheme 'v252'.
Sep 13 01:30:53.347334 systemd[1]: Started systemd-udevd.service.
Sep 13 01:30:53.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:53.356000 audit: BPF prog-id=23 op=LOAD
Sep 13 01:30:53.358722 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:30:53.383064 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Sep 13 01:30:53.495271 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 01:30:53.495393 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 01:30:53.495423 kernel: kauditd_printk_skb: 51 callbacks suppressed
Sep 13 01:30:53.495455 kernel: audit: type=1334 audit(1757727053.484:149): prog-id=24 op=LOAD
Sep 13 01:30:53.484000 audit: BPF prog-id=24 op=LOAD
Sep 13 01:30:53.493643 systemd[1]: Starting systemd-userdbd.service...
Sep 13 01:30:53.513723 kernel: audit: type=1334 audit(1757727053.484:150): prog-id=25 op=LOAD
Sep 13 01:30:53.513840 kernel: audit: type=1334 audit(1757727053.484:151): prog-id=26 op=LOAD
Sep 13 01:30:53.513870 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 01:30:53.513888 kernel: audit: type=1400 audit(1757727053.476:152): avc: denied { confidentiality } for pid=1240 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 01:30:53.484000 audit: BPF prog-id=25 op=LOAD
Sep 13 01:30:53.484000 audit: BPF prog-id=26 op=LOAD
Sep 13 01:30:53.476000 audit[1240]: AVC avc: denied { confidentiality } for pid=1240 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 01:30:53.549568 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 01:30:53.549703 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 13 01:30:53.549740 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 01:30:53.549775 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 01:30:53.565623 kernel: Console: switching to colour dummy device 80x25
Sep 13 01:30:53.567662 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:30:53.476000 audit[1240]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae375b7d0 a1=aa2c a2=ffff987724b0 a3=aaaae36b8010 items=12 ppid=1227 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:30:53.476000 audit: CWD cwd="/"
Sep 13 01:30:53.607030 kernel: audit: type=1300 audit(1757727053.476:152): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae375b7d0 a1=aa2c a2=ffff987724b0 a3=aaaae36b8010 items=12 ppid=1227 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:30:53.607103 kernel: audit: type=1307 audit(1757727053.476:152): cwd="/"
Sep 13 01:30:53.607126 kernel: audit: type=1302 audit(1757727053.476:152): item=0 name=(null) inode=7181 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=0 name=(null) inode=7181 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=1 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.637433 kernel: audit: type=1302 audit(1757727053.476:152): item=1 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.646098 systemd[1]: Started systemd-userdbd.service.
Sep 13 01:30:53.666241 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 01:30:53.666346 kernel: audit: type=1302 audit(1757727053.476:152): item=2 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.666418 kernel: hv_vmbus: registering driver hv_utils
Sep 13 01:30:53.476000 audit: PATH item=2 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=3 name=(null) inode=10688 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.685152 kernel: audit: type=1302 audit(1757727053.476:152): item=3 name=(null) inode=10688 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.685247 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 01:30:53.476000 audit: PATH item=4 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=5 name=(null) inode=10689 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=6 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=7 name=(null) inode=10690 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=8 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=9 name=(null) inode=10691 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=10 name=(null) inode=10687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PATH item=11 name=(null) inode=10692 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:30:53.476000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 01:30:53.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:53.691745 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 01:30:53.695257 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 01:30:54.096394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:30:54.102994 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 01:30:54.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:54.108989 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 01:30:54.454773 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:30:54.517464 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 01:30:54.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:54.522443 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:30:54.527923 systemd[1]: Starting lvm2-activation.service...
Sep 13 01:30:54.531807 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:30:54.551435 systemd[1]: Finished lvm2-activation.service.
Sep 13 01:30:54.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:54.555926 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:30:54.560505 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 01:30:54.560533 systemd[1]: Reached target local-fs.target.
Sep 13 01:30:54.564842 systemd[1]: Reached target machines.target.
Sep 13 01:30:54.570083 systemd[1]: Starting ldconfig.service...
Sep 13 01:30:54.581608 systemd-networkd[1248]: lo: Link UP
Sep 13 01:30:54.581615 systemd-networkd[1248]: lo: Gained carrier
Sep 13 01:30:54.582036 systemd-networkd[1248]: Enumeration completed
Sep 13 01:30:54.612842 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:30:54.612914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:30:54.614106 systemd[1]: Starting systemd-boot-update.service...
Sep 13 01:30:54.618609 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:30:54.619476 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 01:30:54.626440 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 01:30:54.632017 systemd[1]: Starting systemd-sysext.service...
Sep 13 01:30:54.636120 systemd[1]: Started systemd-networkd.service.
Sep 13 01:30:54.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:54.641812 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:30:54.682389 kernel: mlx5_core 1214:00:02.0 enP4628s1: Link up
Sep 13 01:30:54.682659 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:30:54.690160 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1307 (bootctl)
Sep 13 01:30:54.691414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 01:30:54.726608 kernel: hv_netvsc 0022487a-600d-0022-487a-600d0022487a eth0: Data path switched to VF: enP4628s1
Sep 13 01:30:54.728495 systemd-networkd[1248]: enP4628s1: Link UP
Sep 13 01:30:54.728981 systemd-networkd[1248]: eth0: Link UP
Sep 13 01:30:54.729053 systemd-networkd[1248]: eth0: Gained carrier
Sep 13 01:30:54.733118 systemd-networkd[1248]: enP4628s1: Gained carrier
Sep 13 01:30:54.738720 systemd-networkd[1248]: eth0: DHCPv4 address 10.200.20.47/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:30:54.757984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 01:30:54.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:54.798579 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 01:30:54.799159 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 01:30:54.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:54.844742 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 01:30:54.898863 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 01:30:54.899060 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 01:30:54.965629 kernel: loop0: detected capacity change from 0 to 211168
Sep 13 01:30:55.042807 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 01:30:55.042935 systemd-fsck[1315]: fsck.fat 4.2 (2021-01-31)
Sep 13 01:30:55.042935 systemd-fsck[1315]: /dev/sda1: 236 files, 117310/258078 clusters
Sep 13 01:30:55.042812 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 01:30:55.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.050156 systemd[1]: Mounting boot.mount...
Sep 13 01:30:55.060832 systemd[1]: Mounted boot.mount.
Sep 13 01:30:55.074636 kernel: loop1: detected capacity change from 0 to 211168
Sep 13 01:30:55.080796 systemd[1]: Finished systemd-boot-update.service.
Sep 13 01:30:55.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.093141 (sd-sysext)[1324]: Using extensions 'kubernetes'.
Sep 13 01:30:55.093475 (sd-sysext)[1324]: Merged extensions into '/usr'.
Sep 13 01:30:55.109634 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 01:30:55.113960 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.115090 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:30:55.119956 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:30:55.124920 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:30:55.129189 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.129333 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:30:55.131772 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 01:30:55.136077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:30:55.136199 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:30:55.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.140864 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:30:55.140977 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:30:55.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.145918 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:30:55.146025 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:30:55.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.150926 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:30:55.151023 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.152032 systemd[1]: Finished systemd-sysext.service.
Sep 13 01:30:55.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.157498 systemd[1]: Starting ensure-sysext.service...
Sep 13 01:30:55.162414 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 01:30:55.170662 systemd[1]: Reloading.
Sep 13 01:30:55.193921 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 01:30:55.217658 /usr/lib/systemd/system-generators/torcx-generator[1350]: time="2025-09-13T01:30:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:30:55.217688 /usr/lib/systemd/system-generators/torcx-generator[1350]: time="2025-09-13T01:30:55Z" level=info msg="torcx already run"
Sep 13 01:30:55.227635 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 01:30:55.268072 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 01:30:55.296384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:30:55.296402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:30:55.311560 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:30:55.374000 audit: BPF prog-id=27 op=LOAD
Sep 13 01:30:55.374000 audit: BPF prog-id=24 op=UNLOAD
Sep 13 01:30:55.374000 audit: BPF prog-id=28 op=LOAD
Sep 13 01:30:55.374000 audit: BPF prog-id=29 op=LOAD
Sep 13 01:30:55.374000 audit: BPF prog-id=25 op=UNLOAD
Sep 13 01:30:55.374000 audit: BPF prog-id=26 op=UNLOAD
Sep 13 01:30:55.375000 audit: BPF prog-id=30 op=LOAD
Sep 13 01:30:55.375000 audit: BPF prog-id=31 op=LOAD
Sep 13 01:30:55.375000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 01:30:55.375000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 01:30:55.376000 audit: BPF prog-id=32 op=LOAD
Sep 13 01:30:55.376000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 01:30:55.376000 audit: BPF prog-id=33 op=LOAD
Sep 13 01:30:55.376000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 01:30:55.376000 audit: BPF prog-id=34 op=LOAD
Sep 13 01:30:55.377000 audit: BPF prog-id=35 op=LOAD
Sep 13 01:30:55.377000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 01:30:55.377000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 01:30:55.392568 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.393734 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:30:55.399172 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:30:55.404323 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:30:55.408081 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.408200 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:30:55.409001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:30:55.409132 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:30:55.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.414649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:30:55.414772 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:30:55.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.419797 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:30:55.419914 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:30:55.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.425794 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.426992 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:30:55.431885 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:30:55.436893 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:30:55.440643 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.440761 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:30:55.441495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:30:55.441634 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:30:55.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.446267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:30:55.446378 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:30:55.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.451447 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:30:55.451558 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:30:55.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.456666 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:30:55.456759 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.458998 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.460173 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:30:55.465047 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:30:55.469810 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:30:55.475275 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:30:55.479029 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.479145 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:30:55.480017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:30:55.480137 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:30:55.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.484800 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:30:55.484911 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:30:55.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.489457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:30:55.489568 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:30:55.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.494814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:30:55.494933 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:30:55.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.499516 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:30:55.499582 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:30:55.500518 systemd[1]: Finished ensure-sysext.service.
Sep 13 01:30:55.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:55.812807 systemd-networkd[1248]: eth0: Gained IPv6LL
Sep 13 01:30:55.818428 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 01:30:55.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.676554 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 01:30:58.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.683360 systemd[1]: Starting audit-rules.service...
Sep 13 01:30:58.686491 kernel: kauditd_printk_skb: 65 callbacks suppressed
Sep 13 01:30:58.686539 kernel: audit: type=1130 audit(1757727058.681:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.709788 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 01:30:58.715651 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 01:30:58.720000 audit: BPF prog-id=36 op=LOAD
Sep 13 01:30:58.722395 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:30:58.727687 kernel: audit: type=1334 audit(1757727058.720:210): prog-id=36 op=LOAD
Sep 13 01:30:58.731000 audit: BPF prog-id=37 op=LOAD
Sep 13 01:30:58.739096 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 01:30:58.741607 kernel: audit: type=1334 audit(1757727058.731:211): prog-id=37 op=LOAD
Sep 13 01:30:58.744430 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 01:30:58.808000 audit[1427]: SYSTEM_BOOT pid=1427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.817208 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 01:30:58.834018 kernel: audit: type=1127 audit(1757727058.808:212): pid=1427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.834106 kernel: audit: type=1130 audit(1757727058.831:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.834000 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 01:30:58.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.856311 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 01:30:58.874620 kernel: audit: type=1130 audit(1757727058.854:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.889416 systemd[1]: Started systemd-timesyncd.service.
Sep 13 01:30:58.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.894185 systemd[1]: Reached target time-set.target.
Sep 13 01:30:58.915354 kernel: audit: type=1130 audit(1757727058.892:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:58.967269 systemd-resolved[1425]: Positive Trust Anchors:
Sep 13 01:30:58.967280 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:30:58.967306 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:30:59.049664 systemd-resolved[1425]: Using system hostname 'ci-3510.3.8-n-a3199d6d1b'.
Sep 13 01:30:59.051336 systemd[1]: Started systemd-resolved.service.
Sep 13 01:30:59.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:59.055898 systemd[1]: Reached target network.target.
Sep 13 01:30:59.077660 kernel: audit: type=1130 audit(1757727059.054:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:59.078022 systemd[1]: Reached target network-online.target.
Sep 13 01:30:59.082717 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:30:59.192435 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 01:30:59.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:59.217606 kernel: audit: type=1130 audit(1757727059.196:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:30:59.302489 systemd-timesyncd[1426]: Contacted time server 193.29.63.226:123 (0.flatcar.pool.ntp.org).
Sep 13 01:30:59.302561 systemd-timesyncd[1426]: Initial clock synchronization to Sat 2025-09-13 01:30:59.302917 UTC.
Sep 13 01:30:59.409380 augenrules[1442]: No rules
Sep 13 01:30:59.407000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 01:30:59.421637 systemd[1]: Finished audit-rules.service.
Sep 13 01:30:59.407000 audit[1442]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe2547920 a2=420 a3=0 items=0 ppid=1421 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:30:59.407000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 01:30:59.423599 kernel: audit: type=1305 audit(1757727059.407:218): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 01:31:08.205957 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 01:31:08.217021 systemd[1]: Finished ldconfig.service.
Sep 13 01:31:08.223147 systemd[1]: Starting systemd-update-done.service...
Sep 13 01:31:08.291435 systemd[1]: Finished systemd-update-done.service.
Sep 13 01:31:08.296323 systemd[1]: Reached target sysinit.target.
Sep 13 01:31:08.300687 systemd[1]: Started motdgen.path.
Sep 13 01:31:08.304274 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 01:31:08.310252 systemd[1]: Started logrotate.timer.
Sep 13 01:31:08.314056 systemd[1]: Started mdadm.timer.
Sep 13 01:31:08.317766 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 01:31:08.322286 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 01:31:08.322317 systemd[1]: Reached target paths.target.
Sep 13 01:31:08.326361 systemd[1]: Reached target timers.target.
Sep 13 01:31:08.330764 systemd[1]: Listening on dbus.socket.
Sep 13 01:31:08.335724 systemd[1]: Starting docker.socket...
Sep 13 01:31:08.373643 systemd[1]: Listening on sshd.socket.
Sep 13 01:31:08.377708 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:31:08.378200 systemd[1]: Listening on docker.socket.
Sep 13 01:31:08.382768 systemd[1]: Reached target sockets.target.
Sep 13 01:31:08.386878 systemd[1]: Reached target basic.target.
Sep 13 01:31:08.390896 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:31:08.390923 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:31:08.391994 systemd[1]: Starting containerd.service...
Sep 13 01:31:08.396759 systemd[1]: Starting dbus.service...
Sep 13 01:31:08.400947 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 01:31:08.406111 systemd[1]: Starting extend-filesystems.service...
Sep 13 01:31:08.410376 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 01:31:08.432434 systemd[1]: Starting kubelet.service...
Sep 13 01:31:08.437021 systemd[1]: Starting motdgen.service...
Sep 13 01:31:08.441310 systemd[1]: Started nvidia.service.
Sep 13 01:31:08.446488 systemd[1]: Starting prepare-helm.service...
Sep 13 01:31:08.451250 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 01:31:08.456544 systemd[1]: Starting sshd-keygen.service...
Sep 13 01:31:08.462284 systemd[1]: Starting systemd-logind.service...
Sep 13 01:31:08.466306 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:31:08.466378 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 01:31:08.466778 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 01:31:08.467421 systemd[1]: Starting update-engine.service...
Sep 13 01:31:08.472307 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 01:31:08.485532 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 01:31:08.485783 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 01:31:08.540832 extend-filesystems[1453]: Found loop1
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda1
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda2
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda3
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found usr
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda4
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda6
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda7
Sep 13 01:31:08.544997 extend-filesystems[1453]: Found sda9
Sep 13 01:31:08.544997 extend-filesystems[1453]: Checking size of /dev/sda9
Sep 13 01:31:08.591176 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 01:31:08.591353 systemd[1]: Finished motdgen.service.
Sep 13 01:31:08.623032 jq[1464]: true
Sep 13 01:31:08.623295 jq[1452]: false
Sep 13 01:31:08.635137 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 01:31:08.637974 systemd-logind[1462]: New seat seat0.
Sep 13 01:31:08.642105 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 01:31:08.642258 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 01:31:08.686277 env[1477]: time="2025-09-13T01:31:08.686218715Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 01:31:08.687940 jq[1489]: true
Sep 13 01:31:08.693441 tar[1468]: linux-arm64/LICENSE
Sep 13 01:31:08.693441 tar[1468]: linux-arm64/helm
Sep 13 01:31:08.714938 extend-filesystems[1453]: Old size kept for /dev/sda9
Sep 13 01:31:08.714938 extend-filesystems[1453]: Found sr0
Sep 13 01:31:08.719947 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 01:31:08.720110 systemd[1]: Finished extend-filesystems.service.
Sep 13 01:31:08.745816 env[1477]: time="2025-09-13T01:31:08.745771536Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 01:31:08.745981 env[1477]: time="2025-09-13T01:31:08.745952500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:31:08.748747 env[1477]: time="2025-09-13T01:31:08.748709516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:31:08.748747 env[1477]: time="2025-09-13T01:31:08.748743717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:31:08.748989 env[1477]: time="2025-09-13T01:31:08.748959322Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:31:08.748989 env[1477]: time="2025-09-13T01:31:08.748984722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 01:31:08.749048 env[1477]: time="2025-09-13T01:31:08.748998202Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 01:31:08.749048 env[1477]: time="2025-09-13T01:31:08.749007763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 01:31:08.749105 env[1477]: time="2025-09-13T01:31:08.749084484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:31:08.749313 env[1477]: time="2025-09-13T01:31:08.749289728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 01:31:08.749456 env[1477]: time="2025-09-13T01:31:08.749433091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 01:31:08.749482 env[1477]: time="2025-09-13T01:31:08.749453972Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 01:31:08.749526 env[1477]: time="2025-09-13T01:31:08.749506893Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 01:31:08.749555 env[1477]: time="2025-09-13T01:31:08.749523893Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 01:31:08.776032 env[1477]: time="2025-09-13T01:31:08.775984636Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 01:31:08.776032 env[1477]: time="2025-09-13T01:31:08.776030757Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 01:31:08.776168 env[1477]: time="2025-09-13T01:31:08.776044797Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 01:31:08.776168 env[1477]: time="2025-09-13T01:31:08.776079198Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.776168 env[1477]: time="2025-09-13T01:31:08.776097038Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.776168 env[1477]: time="2025-09-13T01:31:08.776111118Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.776168 env[1477]: time="2025-09-13T01:31:08.776123359Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.777112 env[1477]: time="2025-09-13T01:31:08.777069938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.777179 env[1477]: time="2025-09-13T01:31:08.777111619Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.777179 env[1477]: time="2025-09-13T01:31:08.777128779Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.777179 env[1477]: time="2025-09-13T01:31:08.777142340Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.777179 env[1477]: time="2025-09-13T01:31:08.777155260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 01:31:08.777286 env[1477]: time="2025-09-13T01:31:08.777270422Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 01:31:08.777389 env[1477]: time="2025-09-13T01:31:08.777365984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 01:31:08.778696 env[1477]: time="2025-09-13T01:31:08.778656491Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 01:31:08.778791 env[1477]: time="2025-09-13T01:31:08.778703252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778791 env[1477]: time="2025-09-13T01:31:08.778718252Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 01:31:08.778791 env[1477]: time="2025-09-13T01:31:08.778762613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778791 env[1477]: time="2025-09-13T01:31:08.778774893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778876 env[1477]: time="2025-09-13T01:31:08.778803054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778876 env[1477]: time="2025-09-13T01:31:08.778816534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778876 env[1477]: time="2025-09-13T01:31:08.778829454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778876 env[1477]: time="2025-09-13T01:31:08.778842175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778876 env[1477]: time="2025-09-13T01:31:08.778853455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778876 env[1477]: time="2025-09-13T01:31:08.778864735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.778988 env[1477]: time="2025-09-13T01:31:08.778877975Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 01:31:08.779011 env[1477]: time="2025-09-13T01:31:08.778999658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.779030 env[1477]: time="2025-09-13T01:31:08.779015338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.779050 env[1477]: time="2025-09-13T01:31:08.779028098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.779050 env[1477]: time="2025-09-13T01:31:08.779039939Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 01:31:08.779086 env[1477]: time="2025-09-13T01:31:08.779053139Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 01:31:08.779086 env[1477]: time="2025-09-13T01:31:08.779063939Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 01:31:08.779086 env[1477]: time="2025-09-13T01:31:08.779080339Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 01:31:08.779142 env[1477]: time="2025-09-13T01:31:08.779113300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 01:31:08.779357 env[1477]: time="2025-09-13T01:31:08.779303144Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.779361905Z" level=info msg="Connect containerd service"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.779397226Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.783867838Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.784090402Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.784123243Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.784167564Z" level=info msg="containerd successfully booted in 0.108252s"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.785736796Z" level=info msg="Start subscribing containerd event"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.785792317Z" level=info msg="Start recovering state"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.785861559Z" level=info msg="Start event monitor"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.785882679Z" level=info msg="Start snapshots syncer"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.785892719Z" level=info msg="Start cni network conf syncer for default"
Sep 13 01:31:08.797799 env[1477]: time="2025-09-13T01:31:08.785904279Z" level=info msg="Start streaming server"
Sep 13 01:31:08.784227 systemd[1]: Started containerd.service.
Sep 13 01:31:08.870610 bash[1520]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 01:31:08.871371 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 01:31:08.910420 systemd[1]: nvidia.service: Deactivated successfully.
Sep 13 01:31:09.392839 dbus-daemon[1451]: [system] SELinux support is enabled
Sep 13 01:31:09.393025 systemd[1]: Started dbus.service.
Sep 13 01:31:09.398243 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 01:31:09.398263 systemd[1]: Reached target system-config.target.
Sep 13 01:31:09.403341 update_engine[1463]: I0913 01:31:09.381289 1463 main.cc:92] Flatcar Update Engine starting
Sep 13 01:31:09.406113 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 01:31:09.406131 systemd[1]: Reached target user-config.target.
Sep 13 01:31:09.415658 systemd[1]: Started systemd-logind.service.
Sep 13 01:31:09.488939 systemd[1]: Started update-engine.service.
Sep 13 01:31:09.489242 update_engine[1463]: I0913 01:31:09.488991 1463 update_check_scheduler.cc:74] Next update check in 11m55s
Sep 13 01:31:09.495082 systemd[1]: Started locksmithd.service.
Sep 13 01:31:09.521000 tar[1468]: linux-arm64/README.md
Sep 13 01:31:09.525769 systemd[1]: Finished prepare-helm.service.
Sep 13 01:31:09.618641 systemd[1]: Started kubelet.service.
Sep 13 01:31:10.122003 kubelet[1559]: E0913 01:31:10.121952 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:31:10.123864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:31:10.123984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:31:11.305148 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 01:31:11.640554 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 01:31:11.663867 systemd[1]: Finished sshd-keygen.service.
Sep 13 01:31:11.670285 systemd[1]: Starting issuegen.service...
Sep 13 01:31:11.675076 systemd[1]: Started waagent.service.
Sep 13 01:31:11.679607 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 01:31:11.679754 systemd[1]: Finished issuegen.service.
Sep 13 01:31:11.685030 systemd[1]: Starting systemd-user-sessions.service...
Sep 13 01:31:11.743220 systemd[1]: Finished systemd-user-sessions.service.
Sep 13 01:31:11.749904 systemd[1]: Started getty@tty1.service.
Sep 13 01:31:11.755641 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 13 01:31:11.760577 systemd[1]: Reached target getty.target.
Sep 13 01:31:11.764990 systemd[1]: Reached target multi-user.target.
Sep 13 01:31:11.771630 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 13 01:31:11.785109 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 13 01:31:11.785273 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 13 01:31:11.790833 systemd[1]: Startup finished in 715ms (kernel) + 16.954s (initrd) + 38.083s (userspace) = 55.753s.
Sep 13 01:31:13.027803 login[1584]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Sep 13 01:31:13.060322 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:31:13.238268 systemd[1]: Created slice user-500.slice.
Sep 13 01:31:13.239387 systemd[1]: Starting user-runtime-dir@500.service...
Sep 13 01:31:13.241990 systemd-logind[1462]: New session 1 of user core.
Sep 13 01:31:13.300082 systemd[1]: Finished user-runtime-dir@500.service.
Sep 13 01:31:13.301472 systemd[1]: Starting user@500.service...
Sep 13 01:31:13.372695 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:31:14.127132 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 13 01:31:14.131745 systemd-logind[1462]: New session 2 of user core.
Sep 13 01:31:14.229083 systemd[1587]: Queued start job for default target default.target.
Sep 13 01:31:14.229624 systemd[1587]: Reached target paths.target.
Sep 13 01:31:14.229645 systemd[1587]: Reached target sockets.target.
Sep 13 01:31:14.229658 systemd[1587]: Reached target timers.target.
Sep 13 01:31:14.229668 systemd[1587]: Reached target basic.target.
Sep 13 01:31:14.229722 systemd[1587]: Reached target default.target.
Sep 13 01:31:14.229749 systemd[1587]: Startup finished in 851ms.
Sep 13 01:31:14.229848 systemd[1]: Started user@500.service.
Sep 13 01:31:14.230788 systemd[1]: Started session-1.scope.
Sep 13 01:31:14.231333 systemd[1]: Started session-2.scope.
Sep 13 01:31:20.177478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:31:20.177673 systemd[1]: Stopped kubelet.service.
Sep 13 01:31:20.179028 systemd[1]: Starting kubelet.service...
Sep 13 01:31:20.653653 systemd[1]: Started kubelet.service.
Sep 13 01:31:20.690541 kubelet[1613]: E0913 01:31:20.690490 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:31:20.693213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:31:20.693328 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:31:23.403026 waagent[1580]: 2025-09-13T01:31:23.402914Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Sep 13 01:31:23.441580 waagent[1580]: 2025-09-13T01:31:23.441491Z INFO Daemon Daemon OS: flatcar 3510.3.8
Sep 13 01:31:23.446441 waagent[1580]: 2025-09-13T01:31:23.446371Z INFO Daemon Daemon Python: 3.9.16
Sep 13 01:31:23.451211 waagent[1580]: 2025-09-13T01:31:23.451125Z INFO Daemon Daemon Run daemon
Sep 13 01:31:23.455677 waagent[1580]: 2025-09-13T01:31:23.455616Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8'
Sep 13 01:31:23.493449 waagent[1580]: 2025-09-13T01:31:23.493311Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Sep 13 01:31:23.508069 waagent[1580]: 2025-09-13T01:31:23.507941Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 13 01:31:23.517478 waagent[1580]: 2025-09-13T01:31:23.517418Z INFO Daemon Daemon cloud-init is enabled: False
Sep 13 01:31:23.522692 waagent[1580]: 2025-09-13T01:31:23.522633Z INFO Daemon Daemon Using waagent for provisioning
Sep 13 01:31:23.528299 waagent[1580]: 2025-09-13T01:31:23.528239Z INFO Daemon Daemon Activate resource disk
Sep 13 01:31:23.532983 waagent[1580]: 2025-09-13T01:31:23.532928Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 13 01:31:23.547070 waagent[1580]: 2025-09-13T01:31:23.547010Z INFO Daemon Daemon Found device: None
Sep 13 01:31:23.551530 waagent[1580]: 2025-09-13T01:31:23.551471Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 13 01:31:23.559816 waagent[1580]: 2025-09-13T01:31:23.559759Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 13 01:31:23.572220 waagent[1580]: 2025-09-13T01:31:23.572160Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 13 01:31:23.578465 waagent[1580]: 2025-09-13T01:31:23.578407Z INFO Daemon Daemon Running default provisioning handler
Sep 13 01:31:23.591956 waagent[1580]: 2025-09-13T01:31:23.591825Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Sep 13 01:31:23.607209 waagent[1580]: 2025-09-13T01:31:23.607076Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 13 01:31:23.617440 waagent[1580]: 2025-09-13T01:31:23.617342Z INFO Daemon Daemon cloud-init is enabled: False
Sep 13 01:31:23.622736 waagent[1580]: 2025-09-13T01:31:23.622673Z INFO Daemon Daemon Copying ovf-env.xml
Sep 13 01:31:23.790898 waagent[1580]: 2025-09-13T01:31:23.790161Z INFO Daemon Daemon Successfully mounted dvd
Sep 13 01:31:23.977635 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 13 01:31:24.036241 waagent[1580]: 2025-09-13T01:31:24.036090Z INFO Daemon Daemon Detect protocol endpoint
Sep 13 01:31:24.041237 waagent[1580]: 2025-09-13T01:31:24.041126Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 13 01:31:24.047072 waagent[1580]: 2025-09-13T01:31:24.046998Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 13 01:31:24.053787 waagent[1580]: 2025-09-13T01:31:24.053720Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 13 01:31:24.060514 waagent[1580]: 2025-09-13T01:31:24.060449Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 13 01:31:24.065973 waagent[1580]: 2025-09-13T01:31:24.065913Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 13 01:31:24.290919 waagent[1580]: 2025-09-13T01:31:24.290838Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 13 01:31:24.298339 waagent[1580]: 2025-09-13T01:31:24.298254Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 13 01:31:24.303533 waagent[1580]: 2025-09-13T01:31:24.303471Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 13 01:31:25.371814 waagent[1580]: 2025-09-13T01:31:25.371661Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 13 01:31:25.386923 waagent[1580]: 2025-09-13T01:31:25.386846Z INFO Daemon Daemon Forcing an update of the goal state..
Sep 13 01:31:25.392503 waagent[1580]: 2025-09-13T01:31:25.392438Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Sep 13 01:31:25.542166 waagent[1580]: 2025-09-13T01:31:25.542027Z INFO Daemon Daemon Found private key matching thumbprint EE2DC3363202589D25E9D5AC14143355D7702204
Sep 13 01:31:25.550599 waagent[1580]: 2025-09-13T01:31:25.550514Z INFO Daemon Daemon Fetch goal state completed
Sep 13 01:31:25.601777 waagent[1580]: 2025-09-13T01:31:25.601721Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 10590230-7d95-494a-b3cd-220b1535eb21 New eTag: 2205640609447785404]
Sep 13 01:31:25.612601 waagent[1580]: 2025-09-13T01:31:25.612525Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Sep 13 01:31:25.661046 waagent[1580]: 2025-09-13T01:31:25.660922Z INFO Daemon Daemon Starting provisioning
Sep 13 01:31:25.666058 waagent[1580]: 2025-09-13T01:31:25.665985Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 13 01:31:25.670677 waagent[1580]: 2025-09-13T01:31:25.670614Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-a3199d6d1b]
Sep 13 01:31:25.739206 waagent[1580]: 2025-09-13T01:31:25.739082Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-a3199d6d1b]
Sep 13 01:31:25.745747 waagent[1580]: 2025-09-13T01:31:25.745667Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 13 01:31:25.752071 waagent[1580]: 2025-09-13T01:31:25.752007Z INFO Daemon Daemon Primary interface is [eth0]
Sep 13 01:31:25.769288 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Sep 13 01:31:25.769465 systemd[1]: Stopped systemd-networkd-wait-online.service.
Sep 13 01:31:25.769520 systemd[1]: Stopping systemd-networkd-wait-online.service...
Sep 13 01:31:25.769781 systemd[1]: Stopping systemd-networkd.service...
Sep 13 01:31:25.774654 systemd-networkd[1248]: eth0: DHCPv6 lease lost
Sep 13 01:31:25.776291 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 01:31:25.776453 systemd[1]: Stopped systemd-networkd.service.
Sep 13 01:31:25.778413 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:31:25.806961 systemd-networkd[1640]: enP4628s1: Link UP
Sep 13 01:31:25.806973 systemd-networkd[1640]: enP4628s1: Gained carrier
Sep 13 01:31:25.808067 systemd-networkd[1640]: eth0: Link UP
Sep 13 01:31:25.808077 systemd-networkd[1640]: eth0: Gained carrier
Sep 13 01:31:25.808420 systemd-networkd[1640]: lo: Link UP
Sep 13 01:31:25.808429 systemd-networkd[1640]: lo: Gained carrier
Sep 13 01:31:25.808696 systemd-networkd[1640]: eth0: Gained IPv6LL
Sep 13 01:31:25.809149 systemd-networkd[1640]: Enumeration completed
Sep 13 01:31:25.809258 systemd[1]: Started systemd-networkd.service.
Sep 13 01:31:25.810246 systemd-networkd[1640]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:31:25.811012 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:31:25.815291 waagent[1580]: 2025-09-13T01:31:25.814658Z INFO Daemon Daemon Create user account if not exists
Sep 13 01:31:25.821712 waagent[1580]: 2025-09-13T01:31:25.821632Z INFO Daemon Daemon User core already exists, skip useradd
Sep 13 01:31:25.827500 waagent[1580]: 2025-09-13T01:31:25.827432Z INFO Daemon Daemon Configure sudoer
Sep 13 01:31:25.831673 systemd-networkd[1640]: eth0: DHCPv4 address 10.200.20.47/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:31:25.835158 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 01:31:25.847893 waagent[1580]: 2025-09-13T01:31:25.847813Z INFO Daemon Daemon Configure sshd
Sep 13 01:31:25.852106 waagent[1580]: 2025-09-13T01:31:25.852042Z INFO Daemon Daemon Deploy ssh public key.
Sep 13 01:31:27.197512 waagent[1580]: 2025-09-13T01:31:27.197424Z INFO Daemon Daemon Provisioning complete
Sep 13 01:31:27.216779 waagent[1580]: 2025-09-13T01:31:27.216713Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 13 01:31:27.222785 waagent[1580]: 2025-09-13T01:31:27.222721Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 13 01:31:27.233225 waagent[1580]: 2025-09-13T01:31:27.233161Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Sep 13 01:31:27.528268 waagent[1646]: 2025-09-13T01:31:27.528124Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Sep 13 01:31:27.529330 waagent[1646]: 2025-09-13T01:31:27.529268Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:31:27.529611 waagent[1646]: 2025-09-13T01:31:27.529541Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:31:27.542294 waagent[1646]: 2025-09-13T01:31:27.542226Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Sep 13 01:31:27.542566 waagent[1646]: 2025-09-13T01:31:27.542517Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Sep 13 01:31:27.600337 waagent[1646]: 2025-09-13T01:31:27.600212Z INFO ExtHandler ExtHandler Found private key matching thumbprint EE2DC3363202589D25E9D5AC14143355D7702204
Sep 13 01:31:27.600811 waagent[1646]: 2025-09-13T01:31:27.600756Z INFO ExtHandler ExtHandler Fetch goal state completed
Sep 13 01:31:27.614914 waagent[1646]: 2025-09-13T01:31:27.614860Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: f59658ed-54a8-4d42-bf6a-4831a4e9bd16 New eTag: 2205640609447785404]
Sep 13 01:31:27.615603 waagent[1646]: 2025-09-13T01:31:27.615537Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Sep 13 01:31:27.800929 waagent[1646]: 2025-09-13T01:31:27.800744Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 13 01:31:27.826548 waagent[1646]: 2025-09-13T01:31:27.826467Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1646
Sep 13 01:31:27.830349 waagent[1646]: 2025-09-13T01:31:27.830290Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Sep 13 01:31:27.831688 waagent[1646]: 2025-09-13T01:31:27.831631Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 13 01:31:28.064129 waagent[1646]: 2025-09-13T01:31:28.064018Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 13 01:31:28.064754 waagent[1646]: 2025-09-13T01:31:28.064694Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 13 01:31:28.072650 waagent[1646]: 2025-09-13T01:31:28.072571Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 13 01:31:28.073307 waagent[1646]: 2025-09-13T01:31:28.073250Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Sep 13 01:31:28.074642 waagent[1646]: 2025-09-13T01:31:28.074557Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Sep 13 01:31:28.076076 waagent[1646]: 2025-09-13T01:31:28.076007Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 13 01:31:28.076324 waagent[1646]: 2025-09-13T01:31:28.076252Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 13 01:31:28.076971 waagent[1646]: 2025-09-13T01:31:28.076870Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 13 01:31:28.077580 waagent[1646]: 2025-09-13T01:31:28.077513Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 13 01:31:28.077936 waagent[1646]: 2025-09-13T01:31:28.077874Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 13 01:31:28.077936 waagent[1646]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 13 01:31:28.077936 waagent[1646]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 13 01:31:28.077936 waagent[1646]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 13 01:31:28.077936 waagent[1646]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:31:28.077936 waagent[1646]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:31:28.077936 waagent[1646]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 13 01:31:28.080383 waagent[1646]: 2025-09-13T01:31:28.080220Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 13 01:31:28.080947 waagent[1646]: 2025-09-13T01:31:28.080870Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:31:28.081729 waagent[1646]: 2025-09-13T01:31:28.081659Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:31:28.082321 waagent[1646]: 2025-09-13T01:31:28.082245Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:31:28.082469 waagent[1646]: 2025-09-13T01:31:28.082423Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:31:28.082583 waagent[1646]: 2025-09-13T01:31:28.082540Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:31:28.083453 waagent[1646]: 2025-09-13T01:31:28.083395Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:31:28.083613 waagent[1646]: 2025-09-13T01:31:28.083530Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:31:28.084348 waagent[1646]: 2025-09-13T01:31:28.084255Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:31:28.084538 waagent[1646]: 2025-09-13T01:31:28.084466Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 01:31:28.084857 waagent[1646]: 2025-09-13T01:31:28.084788Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:31:28.096204 waagent[1646]: 2025-09-13T01:31:28.096129Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 13 01:31:28.097032 waagent[1646]: 2025-09-13T01:31:28.096979Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:31:28.098926 waagent[1646]: 2025-09-13T01:31:28.098867Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Sep 13 01:31:28.146376 waagent[1646]: 2025-09-13T01:31:28.146303Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Sep 13 01:31:28.154697 waagent[1646]: 2025-09-13T01:31:28.154525Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1640' Sep 13 01:31:28.293082 waagent[1646]: 2025-09-13T01:31:28.292952Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:31:28.293082 waagent[1646]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:31:28.293082 waagent[1646]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:31:28.293082 waagent[1646]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:60:0d brd ff:ff:ff:ff:ff:ff Sep 13 01:31:28.293082 waagent[1646]: 3: enP4628s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:60:0d brd ff:ff:ff:ff:ff:ff\ altname enP4628p0s2 Sep 13 01:31:28.293082 waagent[1646]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:31:28.293082 waagent[1646]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:31:28.293082 waagent[1646]: 2: eth0 inet 10.200.20.47/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:31:28.293082 waagent[1646]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:31:28.293082 waagent[1646]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:31:28.293082 waagent[1646]: 2: eth0 inet6 fe80::222:48ff:fe7a:600d/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:31:28.534157 waagent[1646]: 2025-09-13T01:31:28.534093Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 13 01:31:29.236948 
waagent[1580]: 2025-09-13T01:31:29.236832Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 13 01:31:29.242306 waagent[1580]: 2025-09-13T01:31:29.242252Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 13 01:31:30.525948 waagent[1675]: 2025-09-13T01:31:30.525855Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 13 01:31:30.526995 waagent[1675]: 2025-09-13T01:31:30.526938Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 13 01:31:30.527246 waagent[1675]: 2025-09-13T01:31:30.527196Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 13 01:31:30.527477 waagent[1675]: 2025-09-13T01:31:30.527431Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 13 01:31:30.541322 waagent[1675]: 2025-09-13T01:31:30.541225Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:31:30.541894 waagent[1675]: 2025-09-13T01:31:30.541839Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:31:30.542160 waagent[1675]: 2025-09-13T01:31:30.542113Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:31:30.542485 waagent[1675]: 2025-09-13T01:31:30.542434Z INFO ExtHandler ExtHandler Initializing the goal state... 
Sep 13 01:31:30.555969 waagent[1675]: 2025-09-13T01:31:30.555902Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 01:31:30.568725 waagent[1675]: 2025-09-13T01:31:30.568664Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 01:31:30.569968 waagent[1675]: 2025-09-13T01:31:30.569912Z INFO ExtHandler Sep 13 01:31:30.570239 waagent[1675]: 2025-09-13T01:31:30.570189Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 158c472f-3552-499a-8cf1-7ba314bc462b eTag: 2205640609447785404 source: Fabric] Sep 13 01:31:30.571120 waagent[1675]: 2025-09-13T01:31:30.571065Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 13 01:31:30.572460 waagent[1675]: 2025-09-13T01:31:30.572402Z INFO ExtHandler Sep 13 01:31:30.572732 waagent[1675]: 2025-09-13T01:31:30.572681Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 01:31:30.580011 waagent[1675]: 2025-09-13T01:31:30.579965Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 01:31:30.580689 waagent[1675]: 2025-09-13T01:31:30.580640Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:31:30.601070 waagent[1675]: 2025-09-13T01:31:30.601006Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 13 01:31:30.666692 waagent[1675]: 2025-09-13T01:31:30.666540Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EE2DC3363202589D25E9D5AC14143355D7702204', 'hasPrivateKey': True} Sep 13 01:31:30.668288 waagent[1675]: 2025-09-13T01:31:30.668225Z INFO ExtHandler Fetch goal state from WireServer completed Sep 13 01:31:30.669320 waagent[1675]: 2025-09-13T01:31:30.669263Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Sep 13 01:31:30.690092 waagent[1675]: 2025-09-13T01:31:30.689991Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 13 01:31:30.698202 waagent[1675]: 2025-09-13T01:31:30.698099Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:31:30.702301 waagent[1675]: 2025-09-13T01:31:30.702198Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 13 01:31:30.702679 waagent[1675]: 2025-09-13T01:31:30.702624Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 13 01:31:30.927519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 01:31:30.927703 systemd[1]: Stopped kubelet.service. Sep 13 01:31:30.929022 systemd[1]: Starting kubelet.service... Sep 13 01:31:30.951131 waagent[1675]: 2025-09-13T01:31:30.951002Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Sep 13 01:31:30.951131 waagent[1675]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:31:30.951131 waagent[1675]: pkts bytes target prot opt in out source destination Sep 13 01:31:30.951131 waagent[1675]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:31:30.951131 waagent[1675]: pkts bytes target prot opt in out source destination Sep 13 01:31:30.951131 waagent[1675]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:31:30.951131 waagent[1675]: pkts bytes target prot opt in out source destination Sep 13 01:31:30.951131 waagent[1675]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 01:31:30.951131 waagent[1675]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:31:30.951131 waagent[1675]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 01:31:30.952712 waagent[1675]: 2025-09-13T01:31:30.952646Z INFO ExtHandler 
ExtHandler Setting up persistent firewall rules Sep 13 01:31:30.956217 waagent[1675]: 2025-09-13T01:31:30.956104Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 13 01:31:30.956666 waagent[1675]: 2025-09-13T01:31:30.956579Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:31:30.957148 waagent[1675]: 2025-09-13T01:31:30.957090Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:31:30.966458 waagent[1675]: 2025-09-13T01:31:30.966399Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 13 01:31:30.967231 waagent[1675]: 2025-09-13T01:31:30.967170Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:31:30.975954 waagent[1675]: 2025-09-13T01:31:30.975880Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1675 Sep 13 01:31:30.979491 waagent[1675]: 2025-09-13T01:31:30.979423Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:31:30.980487 waagent[1675]: 2025-09-13T01:31:30.980430Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 13 01:31:30.981536 waagent[1675]: 2025-09-13T01:31:30.981478Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 13 01:31:30.984459 waagent[1675]: 2025-09-13T01:31:30.984401Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 13 01:31:30.984950 waagent[1675]: 2025-09-13T01:31:30.984892Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 13 01:31:30.986436 waagent[1675]: 2025-09-13T01:31:30.986379Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:31:30.986999 waagent[1675]: 2025-09-13T01:31:30.986945Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:31:30.987254 waagent[1675]: 2025-09-13T01:31:30.987205Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:31:30.987944 waagent[1675]: 2025-09-13T01:31:30.987892Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 13 01:31:30.988391 waagent[1675]: 2025-09-13T01:31:30.988337Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:31:30.988391 waagent[1675]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:31:30.988391 waagent[1675]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:31:30.988391 waagent[1675]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:31:30.988391 waagent[1675]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:31:30.988391 waagent[1675]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:31:30.988391 waagent[1675]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:31:30.991322 waagent[1675]: 2025-09-13T01:31:30.991196Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 13 01:31:30.992159 waagent[1675]: 2025-09-13T01:31:30.992103Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:31:30.992450 waagent[1675]: 2025-09-13T01:31:30.992398Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:31:30.993085 waagent[1675]: 2025-09-13T01:31:30.993026Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:31:30.993345 waagent[1675]: 2025-09-13T01:31:30.993295Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:31:30.993568 waagent[1675]: 2025-09-13T01:31:30.993522Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:31:30.994675 waagent[1675]: 2025-09-13T01:31:30.994626Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:31:30.997081 waagent[1675]: 2025-09-13T01:31:30.994498Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:31:31.008215 waagent[1675]: 2025-09-13T01:31:31.007127Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:31:31.008215 waagent[1675]: 2025-09-13T01:31:31.007628Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:31:31.013318 waagent[1675]: 2025-09-13T01:31:31.013234Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 13 01:31:31.025354 waagent[1675]: 2025-09-13T01:31:31.025273Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:31:31.026614 waagent[1675]: 2025-09-13T01:31:31.026534Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:31:31.026614 waagent[1675]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:31:31.026614 waagent[1675]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:31:31.026614 waagent[1675]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:60:0d brd ff:ff:ff:ff:ff:ff Sep 13 01:31:31.026614 waagent[1675]: 3: enP4628s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:60:0d brd ff:ff:ff:ff:ff:ff\ altname enP4628p0s2 Sep 13 01:31:31.026614 waagent[1675]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:31:31.026614 waagent[1675]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:31:31.026614 waagent[1675]: 2: eth0 inet 10.200.20.47/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:31:31.026614 waagent[1675]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:31:31.026614 waagent[1675]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:31:31.026614 waagent[1675]: 2: eth0 inet6 fe80::222:48ff:fe7a:600d/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:31:31.035135 waagent[1675]: 2025-09-13T01:31:31.035063Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 13 01:31:31.052621 waagent[1675]: 2025-09-13T01:31:31.052520Z INFO ExtHandler ExtHandler Sep 13 01:31:31.053752 waagent[1675]: 2025-09-13T01:31:31.053691Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 
a7d06992-2a2f-45f6-8d35-0bbec07a3038 correlation 8a6c2364-d469-49ed-ad2f-80d42c881569 created: 2025-09-13T01:29:25.509300Z] Sep 13 01:31:31.057050 waagent[1675]: 2025-09-13T01:31:31.056982Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 13 01:31:31.062648 waagent[1675]: 2025-09-13T01:31:31.062566Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Sep 13 01:31:31.089159 waagent[1675]: 2025-09-13T01:31:31.089093Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 13 01:31:31.090019 systemd[1]: Started kubelet.service. Sep 13 01:31:31.093581 waagent[1675]: 2025-09-13T01:31:31.093338Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 62802345-80A1-42EA-9CAC-3C970F12294C;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 13 01:31:31.150676 kubelet[1717]: E0913 01:31:31.150638 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:31:31.152932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:31:31.153058 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:31:31.517077 waagent[1675]: 2025-09-13T01:31:31.517004Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 01:31:41.177525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 01:31:41.177724 systemd[1]: Stopped kubelet.service. Sep 13 01:31:41.179080 systemd[1]: Starting kubelet.service... Sep 13 01:31:41.270280 systemd[1]: Started kubelet.service. 
Sep 13 01:31:41.401940 kubelet[1729]: E0913 01:31:41.401886 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:31:41.404092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:31:41.404206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:31:41.814437 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 13 01:31:42.480116 systemd[1]: Created slice system-sshd.slice. Sep 13 01:31:42.481577 systemd[1]: Started sshd@0-10.200.20.47:22-10.200.16.10:54062.service. Sep 13 01:31:43.175667 sshd[1736]: Accepted publickey for core from 10.200.16.10 port 54062 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:31:43.196112 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:31:43.199635 systemd-logind[1462]: New session 3 of user core. Sep 13 01:31:43.200411 systemd[1]: Started session-3.scope. Sep 13 01:31:43.545535 systemd[1]: Started sshd@1-10.200.20.47:22-10.200.16.10:54070.service. Sep 13 01:31:43.958897 sshd[1741]: Accepted publickey for core from 10.200.16.10 port 54070 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:31:43.960455 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:31:43.964494 systemd[1]: Started session-4.scope. Sep 13 01:31:43.965373 systemd-logind[1462]: New session 4 of user core. Sep 13 01:31:44.265209 sshd[1741]: pam_unix(sshd:session): session closed for user core Sep 13 01:31:44.267628 systemd[1]: sshd@1-10.200.20.47:22-10.200.16.10:54070.service: Deactivated successfully. Sep 13 01:31:44.268329 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 13 01:31:44.268829 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Sep 13 01:31:44.269733 systemd-logind[1462]: Removed session 4. Sep 13 01:31:44.333301 systemd[1]: Started sshd@2-10.200.20.47:22-10.200.16.10:54086.service. Sep 13 01:31:44.743646 sshd[1747]: Accepted publickey for core from 10.200.16.10 port 54086 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:31:44.744902 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:31:44.749116 systemd[1]: Started session-5.scope. Sep 13 01:31:44.749412 systemd-logind[1462]: New session 5 of user core. Sep 13 01:31:45.046634 sshd[1747]: pam_unix(sshd:session): session closed for user core Sep 13 01:31:45.049243 systemd[1]: sshd@2-10.200.20.47:22-10.200.16.10:54086.service: Deactivated successfully. Sep 13 01:31:45.049942 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 01:31:45.050469 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Sep 13 01:31:45.051329 systemd-logind[1462]: Removed session 5. Sep 13 01:31:45.114527 systemd[1]: Started sshd@3-10.200.20.47:22-10.200.16.10:54098.service. Sep 13 01:31:45.521740 sshd[1753]: Accepted publickey for core from 10.200.16.10 port 54098 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:31:45.523005 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:31:45.526845 systemd-logind[1462]: New session 6 of user core. Sep 13 01:31:45.527208 systemd[1]: Started session-6.scope. Sep 13 01:31:45.827672 sshd[1753]: pam_unix(sshd:session): session closed for user core Sep 13 01:31:45.830216 systemd[1]: sshd@3-10.200.20.47:22-10.200.16.10:54098.service: Deactivated successfully. Sep 13 01:31:45.830899 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 01:31:45.831426 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. 
Sep 13 01:31:45.832206 systemd-logind[1462]: Removed session 6. Sep 13 01:31:45.895666 systemd[1]: Started sshd@4-10.200.20.47:22-10.200.16.10:54102.service. Sep 13 01:31:46.305477 sshd[1759]: Accepted publickey for core from 10.200.16.10 port 54102 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:31:46.308196 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:31:46.312290 systemd[1]: Started session-7.scope. Sep 13 01:31:46.313414 systemd-logind[1462]: New session 7 of user core. Sep 13 01:31:47.016196 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 01:31:47.016414 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:31:47.053800 systemd[1]: Starting docker.service... Sep 13 01:31:47.120780 env[1772]: time="2025-09-13T01:31:47.120732877Z" level=info msg="Starting up" Sep 13 01:31:47.122290 env[1772]: time="2025-09-13T01:31:47.122255159Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:31:47.122290 env[1772]: time="2025-09-13T01:31:47.122284359Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:31:47.122417 env[1772]: time="2025-09-13T01:31:47.122313239Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:31:47.122417 env[1772]: time="2025-09-13T01:31:47.122324279Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:31:47.124095 env[1772]: time="2025-09-13T01:31:47.124067362Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:31:47.124095 env[1772]: time="2025-09-13T01:31:47.124090962Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:31:47.124181 env[1772]: time="2025-09-13T01:31:47.124106642Z" level=info msg="ccResolverWrapper: sending update to cc: 
{[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:31:47.124181 env[1772]: time="2025-09-13T01:31:47.124114802Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:31:47.129561 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1383817042-merged.mount: Deactivated successfully. Sep 13 01:31:47.212553 env[1772]: time="2025-09-13T01:31:47.212063108Z" level=info msg="Loading containers: start." Sep 13 01:31:47.503609 kernel: Initializing XFRM netlink socket Sep 13 01:31:47.541633 env[1772]: time="2025-09-13T01:31:47.541582493Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 01:31:47.747747 systemd-networkd[1640]: docker0: Link UP Sep 13 01:31:47.778700 env[1772]: time="2025-09-13T01:31:47.778608166Z" level=info msg="Loading containers: done." Sep 13 01:31:47.788199 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1667803507-merged.mount: Deactivated successfully. Sep 13 01:31:47.806580 env[1772]: time="2025-09-13T01:31:47.806530172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 01:31:47.806773 env[1772]: time="2025-09-13T01:31:47.806742052Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 01:31:47.806888 env[1772]: time="2025-09-13T01:31:47.806865252Z" level=info msg="Daemon has completed initialization" Sep 13 01:31:47.852718 systemd[1]: Started docker.service. Sep 13 01:31:47.854816 env[1772]: time="2025-09-13T01:31:47.854771732Z" level=info msg="API listen on /run/docker.sock" Sep 13 01:31:51.427508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 01:31:51.427697 systemd[1]: Stopped kubelet.service. 
Sep 13 01:31:51.429080 systemd[1]: Starting kubelet.service... Sep 13 01:31:51.766517 systemd[1]: Started kubelet.service. Sep 13 01:31:51.799300 kubelet[1892]: E0913 01:31:51.799260 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:31:51.801360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:31:51.801486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:31:52.117861 env[1477]: time="2025-09-13T01:31:52.117758164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 01:31:52.897459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311960268.mount: Deactivated successfully. Sep 13 01:31:54.503039 env[1477]: time="2025-09-13T01:31:54.502993075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:54.512629 env[1477]: time="2025-09-13T01:31:54.512544725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:54.519451 env[1477]: time="2025-09-13T01:31:54.519419373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:54.526456 env[1477]: time="2025-09-13T01:31:54.526427420Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:54.527359 env[1477]: time="2025-09-13T01:31:54.527334221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 13 01:31:54.529622 env[1477]: time="2025-09-13T01:31:54.529580823Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 01:31:55.046622 update_engine[1463]: I0913 01:31:55.046229 1463 update_attempter.cc:509] Updating boot flags... Sep 13 01:31:56.950752 env[1477]: time="2025-09-13T01:31:56.950697388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:56.957376 env[1477]: time="2025-09-13T01:31:56.957332714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:56.961277 env[1477]: time="2025-09-13T01:31:56.961236998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:56.965253 env[1477]: time="2025-09-13T01:31:56.965217442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:56.966090 env[1477]: time="2025-09-13T01:31:56.966059642Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference 
\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 13 01:31:56.966765 env[1477]: time="2025-09-13T01:31:56.966736123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 01:31:58.840966 env[1477]: time="2025-09-13T01:31:58.840911667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:58.849124 env[1477]: time="2025-09-13T01:31:58.849090873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:58.854818 env[1477]: time="2025-09-13T01:31:58.854782238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:58.860191 env[1477]: time="2025-09-13T01:31:58.860154922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:31:58.861010 env[1477]: time="2025-09-13T01:31:58.860977963Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 13 01:31:58.861723 env[1477]: time="2025-09-13T01:31:58.861697004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 01:32:00.215520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278561442.mount: Deactivated successfully. 
Sep 13 01:32:01.093014 env[1477]: time="2025-09-13T01:32:01.092965977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:32:01.100215 env[1477]: time="2025-09-13T01:32:01.100185382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:32:01.103784 env[1477]: time="2025-09-13T01:32:01.103759385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:32:01.108905 env[1477]: time="2025-09-13T01:32:01.108883028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:32:01.109326 env[1477]: time="2025-09-13T01:32:01.109300348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 01:32:01.109993 env[1477]: time="2025-09-13T01:32:01.109969909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 01:32:01.796556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527666536.mount: Deactivated successfully. Sep 13 01:32:01.927518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 01:32:01.927705 systemd[1]: Stopped kubelet.service. Sep 13 01:32:01.929199 systemd[1]: Starting kubelet.service... Sep 13 01:32:02.043153 systemd[1]: Started kubelet.service. 
Sep 13 01:32:02.143143 kubelet[1942]: E0913 01:32:02.143038 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:32:02.145223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:32:02.145346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:32:03.663875 env[1477]: time="2025-09-13T01:32:03.663810558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:03.673649 env[1477]: time="2025-09-13T01:32:03.673604717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:03.762827 env[1477]: time="2025-09-13T01:32:03.762766949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:04.917205 env[1477]: time="2025-09-13T01:32:04.917152783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:04.918450 env[1477]: time="2025-09-13T01:32:04.918422895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 13 01:32:04.919088 env[1477]: time="2025-09-13T01:32:04.919052131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 01:32:05.753222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755076208.mount: Deactivated successfully.
Sep 13 01:32:05.778908 env[1477]: time="2025-09-13T01:32:05.778860261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:05.789205 env[1477]: time="2025-09-13T01:32:05.789163880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:05.792979 env[1477]: time="2025-09-13T01:32:05.792953657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:05.796496 env[1477]: time="2025-09-13T01:32:05.796459957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:05.797091 env[1477]: time="2025-09-13T01:32:05.797064353Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 01:32:05.797522 env[1477]: time="2025-09-13T01:32:05.797500710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 13 01:32:06.391029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953503692.mount: Deactivated successfully.
Sep 13 01:32:09.350159 env[1477]: time="2025-09-13T01:32:09.350097562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:09.356640 env[1477]: time="2025-09-13T01:32:09.356573167Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:09.360435 env[1477]: time="2025-09-13T01:32:09.360410227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:09.364378 env[1477]: time="2025-09-13T01:32:09.364354526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:09.365135 env[1477]: time="2025-09-13T01:32:09.365103362Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 13 01:32:12.177504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 13 01:32:12.177688 systemd[1]: Stopped kubelet.service.
Sep 13 01:32:12.179004 systemd[1]: Starting kubelet.service...
Sep 13 01:32:12.502500 systemd[1]: Started kubelet.service.
Sep 13 01:32:12.542404 kubelet[1970]: E0913 01:32:12.542365 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 01:32:12.544148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 01:32:12.544271 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 01:32:14.382520 systemd[1]: Stopped kubelet.service.
Sep 13 01:32:14.384490 systemd[1]: Starting kubelet.service...
Sep 13 01:32:14.415267 systemd[1]: Reloading.
Sep 13 01:32:14.490708 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-09-13T01:32:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:32:14.493664 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-09-13T01:32:14Z" level=info msg="torcx already run"
Sep 13 01:32:14.567780 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:32:14.567981 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:32:14.584245 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:32:14.685338 systemd[1]: Started kubelet.service.
Sep 13 01:32:14.686618 systemd[1]: Stopping kubelet.service...
Sep 13 01:32:14.686858 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 01:32:14.687019 systemd[1]: Stopped kubelet.service.
Sep 13 01:32:14.688484 systemd[1]: Starting kubelet.service...
Sep 13 01:32:15.591388 systemd[1]: Started kubelet.service.
Sep 13 01:32:15.622410 kubelet[2069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:32:15.622825 kubelet[2069]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 01:32:15.622904 kubelet[2069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:32:15.623056 kubelet[2069]: I0913 01:32:15.623027 2069 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 01:32:16.277383 kubelet[2069]: I0913 01:32:16.277343 2069 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 13 01:32:16.277383 kubelet[2069]: I0913 01:32:16.277375 2069 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 01:32:16.277650 kubelet[2069]: I0913 01:32:16.277632 2069 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 13 01:32:16.307689 kubelet[2069]: E0913 01:32:16.307649 2069 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 13 01:32:16.308848 kubelet[2069]: I0913 01:32:16.308820 2069 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 01:32:16.318403 kubelet[2069]: E0913 01:32:16.318370 2069 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 01:32:16.318686 kubelet[2069]: I0913 01:32:16.318670 2069 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 01:32:16.321705 kubelet[2069]: I0913 01:32:16.321685 2069 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 01:32:16.323410 kubelet[2069]: I0913 01:32:16.323375 2069 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 01:32:16.323681 kubelet[2069]: I0913 01:32:16.323495 2069 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-a3199d6d1b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 01:32:16.323829 kubelet[2069]: I0913 01:32:16.323818 2069 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 01:32:16.323887 kubelet[2069]: I0913 01:32:16.323879 2069 container_manager_linux.go:303] "Creating device plugin manager"
Sep 13 01:32:16.324057 kubelet[2069]: I0913 01:32:16.324046 2069 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:32:16.327954 kubelet[2069]: I0913 01:32:16.327937 2069 kubelet.go:480] "Attempting to sync node with API server"
Sep 13 01:32:16.328081 kubelet[2069]: I0913 01:32:16.328069 2069 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 01:32:16.328168 kubelet[2069]: I0913 01:32:16.328159 2069 kubelet.go:386] "Adding apiserver pod source"
Sep 13 01:32:16.332264 kubelet[2069]: I0913 01:32:16.332241 2069 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 01:32:16.340496 kubelet[2069]: I0913 01:32:16.340469 2069 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 01:32:16.341079 kubelet[2069]: I0913 01:32:16.341044 2069 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 13 01:32:16.341151 kubelet[2069]: W0913 01:32:16.341107 2069 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 01:32:16.342323 kubelet[2069]: E0913 01:32:16.342291 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 13 01:32:16.342794 kubelet[2069]: E0913 01:32:16.342609 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-a3199d6d1b&limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 01:32:16.342906 kubelet[2069]: I0913 01:32:16.342862 2069 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 01:32:16.343025 kubelet[2069]: I0913 01:32:16.343013 2069 server.go:1289] "Started kubelet"
Sep 13 01:32:16.348219 kubelet[2069]: I0913 01:32:16.348186 2069 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 01:32:16.348825 kubelet[2069]: I0913 01:32:16.348791 2069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 01:32:16.348992 kubelet[2069]: I0913 01:32:16.348960 2069 server.go:317] "Adding debug handlers to kubelet server"
Sep 13 01:32:16.349261 kubelet[2069]: I0913 01:32:16.349242 2069 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 01:32:16.353549 kubelet[2069]: E0913 01:32:16.351921 2069 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.47:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-a3199d6d1b.1864b38485c9d137 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-a3199d6d1b,UID:ci-3510.3.8-n-a3199d6d1b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-a3199d6d1b,},FirstTimestamp:2025-09-13 01:32:16.342987063 +0000 UTC m=+0.747257401,LastTimestamp:2025-09-13 01:32:16.342987063 +0000 UTC m=+0.747257401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-a3199d6d1b,}"
Sep 13 01:32:16.362886 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 13 01:32:16.363216 kubelet[2069]: I0913 01:32:16.363182 2069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 01:32:16.364062 kubelet[2069]: E0913 01:32:16.364041 2069 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 01:32:16.364903 kubelet[2069]: I0913 01:32:16.364890 2069 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 01:32:16.365098 kubelet[2069]: I0913 01:32:16.365083 2069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 01:32:16.366320 kubelet[2069]: I0913 01:32:16.366300 2069 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 01:32:16.366453 kubelet[2069]: I0913 01:32:16.366442 2069 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 01:32:16.367161 kubelet[2069]: E0913 01:32:16.367135 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 01:32:16.367554 kubelet[2069]: E0913 01:32:16.367534 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found"
Sep 13 01:32:16.368040 kubelet[2069]: I0913 01:32:16.368024 2069 factory.go:223] Registration of the systemd container factory successfully
Sep 13 01:32:16.368205 kubelet[2069]: I0913 01:32:16.368190 2069 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 01:32:16.368554 kubelet[2069]: E0913 01:32:16.368531 2069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-a3199d6d1b?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="200ms"
Sep 13 01:32:16.369656 kubelet[2069]: I0913 01:32:16.369637 2069 factory.go:223] Registration of the containerd container factory successfully
Sep 13 01:32:16.429872 kubelet[2069]: I0913 01:32:16.429848 2069 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 01:32:16.430078 kubelet[2069]: I0913 01:32:16.430066 2069 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 01:32:16.430179 kubelet[2069]: I0913 01:32:16.430169 2069 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:32:16.442780 kubelet[2069]: I0913 01:32:16.442756 2069 policy_none.go:49] "None policy: Start"
Sep 13 01:32:16.442924 kubelet[2069]: I0913 01:32:16.442914 2069 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 01:32:16.442991 kubelet[2069]: I0913 01:32:16.442982 2069 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 01:32:16.451846 systemd[1]: Created slice kubepods.slice.
Sep 13 01:32:16.455909 systemd[1]: Created slice kubepods-burstable.slice.
Sep 13 01:32:16.458719 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 13 01:32:16.467410 kubelet[2069]: E0913 01:32:16.467389 2069 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 13 01:32:16.467698 kubelet[2069]: I0913 01:32:16.467684 2069 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 01:32:16.467843 kubelet[2069]: I0913 01:32:16.467812 2069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 01:32:16.470761 kubelet[2069]: E0913 01:32:16.467929 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found"
Sep 13 01:32:16.470927 kubelet[2069]: I0913 01:32:16.470843 2069 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 01:32:16.470927 kubelet[2069]: I0913 01:32:16.468914 2069 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 13 01:32:16.472095 kubelet[2069]: I0913 01:32:16.472055 2069 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 13 01:32:16.472095 kubelet[2069]: I0913 01:32:16.472086 2069 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 13 01:32:16.472206 kubelet[2069]: I0913 01:32:16.472109 2069 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 01:32:16.472206 kubelet[2069]: I0913 01:32:16.472116 2069 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 13 01:32:16.472206 kubelet[2069]: E0913 01:32:16.472155 2069 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Sep 13 01:32:16.472295 kubelet[2069]: E0913 01:32:16.470400 2069 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 01:32:16.472330 kubelet[2069]: E0913 01:32:16.472311 2069 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-a3199d6d1b\" not found"
Sep 13 01:32:16.473873 kubelet[2069]: E0913 01:32:16.473848 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 13 01:32:16.569849 kubelet[2069]: E0913 01:32:16.569746 2069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-a3199d6d1b?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="400ms"
Sep 13 01:32:16.572297 kubelet[2069]: I0913 01:32:16.572268 2069 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.572769 kubelet[2069]: E0913 01:32:16.572548 2069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.587632 systemd[1]: Created slice kubepods-burstable-podb58dce9829a7621eb11a0baf7d0004e6.slice.
Sep 13 01:32:16.595350 kubelet[2069]: E0913 01:32:16.595329 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.600077 systemd[1]: Created slice kubepods-burstable-podbffa82a68a0a6a022dbb4508b111e830.slice.
Sep 13 01:32:16.602143 kubelet[2069]: E0913 01:32:16.601993 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.613645 systemd[1]: Created slice kubepods-burstable-poded9b3dd9c5ad224cb56f8678246f5650.slice.
Sep 13 01:32:16.615359 kubelet[2069]: E0913 01:32:16.615326 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.668663 kubelet[2069]: I0913 01:32:16.668619 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b58dce9829a7621eb11a0baf7d0004e6-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" (UID: \"b58dce9829a7621eb11a0baf7d0004e6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.668663 kubelet[2069]: I0913 01:32:16.668666 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b58dce9829a7621eb11a0baf7d0004e6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" (UID: \"b58dce9829a7621eb11a0baf7d0004e6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669008 kubelet[2069]: I0913 01:32:16.668695 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669008 kubelet[2069]: I0913 01:32:16.668715 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669008 kubelet[2069]: I0913 01:32:16.668733 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669008 kubelet[2069]: I0913 01:32:16.668753 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b58dce9829a7621eb11a0baf7d0004e6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" (UID: \"b58dce9829a7621eb11a0baf7d0004e6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669008 kubelet[2069]: I0913 01:32:16.668778 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669127 kubelet[2069]: I0913 01:32:16.668791 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.669127 kubelet[2069]: I0913 01:32:16.668807 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed9b3dd9c5ad224cb56f8678246f5650-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-a3199d6d1b\" (UID: \"ed9b3dd9c5ad224cb56f8678246f5650\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.683505 kubelet[2069]: E0913 01:32:16.683409 2069 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.47:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-a3199d6d1b.1864b38485c9d137 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-a3199d6d1b,UID:ci-3510.3.8-n-a3199d6d1b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-a3199d6d1b,},FirstTimestamp:2025-09-13 01:32:16.342987063 +0000 UTC m=+0.747257401,LastTimestamp:2025-09-13 01:32:16.342987063 +0000 UTC m=+0.747257401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-a3199d6d1b,}"
Sep 13 01:32:16.774840 kubelet[2069]: I0913 01:32:16.774794 2069 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.775164 kubelet[2069]: E0913 01:32:16.775133 2069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:16.898016 env[1477]: time="2025-09-13T01:32:16.897902714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-a3199d6d1b,Uid:b58dce9829a7621eb11a0baf7d0004e6,Namespace:kube-system,Attempt:0,}"
Sep 13 01:32:16.903767 env[1477]: time="2025-09-13T01:32:16.903727649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-a3199d6d1b,Uid:bffa82a68a0a6a022dbb4508b111e830,Namespace:kube-system,Attempt:0,}"
Sep 13 01:32:16.916665 env[1477]: time="2025-09-13T01:32:16.916628392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-a3199d6d1b,Uid:ed9b3dd9c5ad224cb56f8678246f5650,Namespace:kube-system,Attempt:0,}"
Sep 13 01:32:16.971398 kubelet[2069]: E0913 01:32:16.971364 2069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-a3199d6d1b?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="800ms"
Sep 13 01:32:17.177644 kubelet[2069]: I0913 01:32:17.177511 2069 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:17.177871 kubelet[2069]: E0913 01:32:17.177839 2069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:17.327016 kubelet[2069]: E0913 01:32:17.326971 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 13 01:32:17.523068 kubelet[2069]: E0913 01:32:17.522801 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-a3199d6d1b&limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 13 01:32:17.593755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002037996.mount: Deactivated successfully.
Sep 13 01:32:17.618524 env[1477]: time="2025-09-13T01:32:17.618482831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.633746 env[1477]: time="2025-09-13T01:32:17.633705646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.637902 env[1477]: time="2025-09-13T01:32:17.637863388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.640925 env[1477]: time="2025-09-13T01:32:17.640892615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.644227 env[1477]: time="2025-09-13T01:32:17.644199561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.653186 env[1477]: time="2025-09-13T01:32:17.653153763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.656473 env[1477]: time="2025-09-13T01:32:17.656429069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.659826 env[1477]: time="2025-09-13T01:32:17.659795455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.666375 env[1477]: time="2025-09-13T01:32:17.666341267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.674943 env[1477]: time="2025-09-13T01:32:17.674902230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.682843 env[1477]: time="2025-09-13T01:32:17.682801557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.686361 env[1477]: time="2025-09-13T01:32:17.686326262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:17.717078 env[1477]: time="2025-09-13T01:32:17.717011291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:32:17.717261 env[1477]: time="2025-09-13T01:32:17.717238770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:32:17.717353 env[1477]: time="2025-09-13T01:32:17.717332969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:32:17.717624 env[1477]: time="2025-09-13T01:32:17.717567128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e967c657201b74a17b29a60b6e4de7646e3cce59578f2d2dc5180504765fa5eb pid=2113 runtime=io.containerd.runc.v2
Sep 13 01:32:17.734566 systemd[1]: Started cri-containerd-e967c657201b74a17b29a60b6e4de7646e3cce59578f2d2dc5180504765fa5eb.scope.
Sep 13 01:32:17.772804 kubelet[2069]: E0913 01:32:17.772756 2069 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-a3199d6d1b?timeout=10s\": dial tcp 10.200.20.47:6443: connect: connection refused" interval="1.6s"
Sep 13 01:32:17.775491 env[1477]: time="2025-09-13T01:32:17.771565258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:32:17.775777 env[1477]: time="2025-09-13T01:32:17.775729681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:32:17.775914 env[1477]: time="2025-09-13T01:32:17.775891800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:32:17.776168 env[1477]: time="2025-09-13T01:32:17.776140119Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6305198bc33b98bbfbb42ee07197fd908ee3094cb337957fa0900094f0ed25f pid=2151 runtime=io.containerd.runc.v2 Sep 13 01:32:17.778478 env[1477]: time="2025-09-13T01:32:17.778443629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-a3199d6d1b,Uid:b58dce9829a7621eb11a0baf7d0004e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e967c657201b74a17b29a60b6e4de7646e3cce59578f2d2dc5180504765fa5eb\"" Sep 13 01:32:17.782657 env[1477]: time="2025-09-13T01:32:17.782583451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:32:17.791567 env[1477]: time="2025-09-13T01:32:17.782745211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:32:17.791567 env[1477]: time="2025-09-13T01:32:17.782776371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:32:17.791567 env[1477]: time="2025-09-13T01:32:17.783058969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3cedd0e56af46f7e9478db9c328dc800254c9dfb1a446f18350ae02d53fad17 pid=2169 runtime=io.containerd.runc.v2 Sep 13 01:32:17.788095 systemd[1]: Started cri-containerd-d6305198bc33b98bbfbb42ee07197fd908ee3094cb337957fa0900094f0ed25f.scope. 
Sep 13 01:32:17.792297 env[1477]: time="2025-09-13T01:32:17.792264530Z" level=info msg="CreateContainer within sandbox \"e967c657201b74a17b29a60b6e4de7646e3cce59578f2d2dc5180504765fa5eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 01:32:17.805370 systemd[1]: Started cri-containerd-b3cedd0e56af46f7e9478db9c328dc800254c9dfb1a446f18350ae02d53fad17.scope. Sep 13 01:32:17.821398 kubelet[2069]: E0913 01:32:17.821356 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 01:32:17.832253 env[1477]: time="2025-09-13T01:32:17.832216120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-a3199d6d1b,Uid:bffa82a68a0a6a022dbb4508b111e830,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6305198bc33b98bbfbb42ee07197fd908ee3094cb337957fa0900094f0ed25f\"" Sep 13 01:32:17.836273 env[1477]: time="2025-09-13T01:32:17.836239783Z" level=info msg="CreateContainer within sandbox \"e967c657201b74a17b29a60b6e4de7646e3cce59578f2d2dc5180504765fa5eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b97a24b5bb78e5c45264aa26e72938a2387a20e92a953e8d37a4441142ac414b\"" Sep 13 01:32:17.838834 kubelet[2069]: E0913 01:32:17.838798 2069 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 01:32:17.839156 env[1477]: time="2025-09-13T01:32:17.839135330Z" level=info msg="StartContainer for 
\"b97a24b5bb78e5c45264aa26e72938a2387a20e92a953e8d37a4441142ac414b\"" Sep 13 01:32:17.845434 env[1477]: time="2025-09-13T01:32:17.845391384Z" level=info msg="CreateContainer within sandbox \"d6305198bc33b98bbfbb42ee07197fd908ee3094cb337957fa0900094f0ed25f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 01:32:17.852623 env[1477]: time="2025-09-13T01:32:17.852551313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-a3199d6d1b,Uid:ed9b3dd9c5ad224cb56f8678246f5650,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3cedd0e56af46f7e9478db9c328dc800254c9dfb1a446f18350ae02d53fad17\"" Sep 13 01:32:17.861948 env[1477]: time="2025-09-13T01:32:17.861916753Z" level=info msg="CreateContainer within sandbox \"b3cedd0e56af46f7e9478db9c328dc800254c9dfb1a446f18350ae02d53fad17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 01:32:17.863146 systemd[1]: Started cri-containerd-b97a24b5bb78e5c45264aa26e72938a2387a20e92a953e8d37a4441142ac414b.scope. Sep 13 01:32:17.900121 env[1477]: time="2025-09-13T01:32:17.900065871Z" level=info msg="CreateContainer within sandbox \"d6305198bc33b98bbfbb42ee07197fd908ee3094cb337957fa0900094f0ed25f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3f7109e6789172fc42bb1e80890aa8a314d18792c96f5a3c309b51213a7a61f\"" Sep 13 01:32:17.901006 env[1477]: time="2025-09-13T01:32:17.900977787Z" level=info msg="StartContainer for \"c3f7109e6789172fc42bb1e80890aa8a314d18792c96f5a3c309b51213a7a61f\"" Sep 13 01:32:17.906066 env[1477]: time="2025-09-13T01:32:17.906032005Z" level=info msg="StartContainer for \"b97a24b5bb78e5c45264aa26e72938a2387a20e92a953e8d37a4441142ac414b\" returns successfully" Sep 13 01:32:17.918981 systemd[1]: Started cri-containerd-c3f7109e6789172fc42bb1e80890aa8a314d18792c96f5a3c309b51213a7a61f.scope. 
Sep 13 01:32:17.922566 env[1477]: time="2025-09-13T01:32:17.922532455Z" level=info msg="CreateContainer within sandbox \"b3cedd0e56af46f7e9478db9c328dc800254c9dfb1a446f18350ae02d53fad17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ce28dd25b920d35d7781906d005488e2136c93673bfbf471cad3116c9eab1344\"" Sep 13 01:32:17.923415 env[1477]: time="2025-09-13T01:32:17.923390331Z" level=info msg="StartContainer for \"ce28dd25b920d35d7781906d005488e2136c93673bfbf471cad3116c9eab1344\"" Sep 13 01:32:17.964846 systemd[1]: Started cri-containerd-ce28dd25b920d35d7781906d005488e2136c93673bfbf471cad3116c9eab1344.scope. Sep 13 01:32:17.968747 env[1477]: time="2025-09-13T01:32:17.968713018Z" level=info msg="StartContainer for \"c3f7109e6789172fc42bb1e80890aa8a314d18792c96f5a3c309b51213a7a61f\" returns successfully" Sep 13 01:32:17.979746 kubelet[2069]: I0913 01:32:17.979359 2069 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:17.979746 kubelet[2069]: E0913 01:32:17.979710 2069 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.47:6443/api/v1/nodes\": dial tcp 10.200.20.47:6443: connect: connection refused" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:18.046311 env[1477]: time="2025-09-13T01:32:18.046214013Z" level=info msg="StartContainer for \"ce28dd25b920d35d7781906d005488e2136c93673bfbf471cad3116c9eab1344\" returns successfully" Sep 13 01:32:18.477816 kubelet[2069]: E0913 01:32:18.477789 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:18.479907 kubelet[2069]: E0913 01:32:18.479889 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:18.483338 kubelet[2069]: 
E0913 01:32:18.483322 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:18.589771 systemd[1]: run-containerd-runc-k8s.io-e967c657201b74a17b29a60b6e4de7646e3cce59578f2d2dc5180504765fa5eb-runc.O4ZE7y.mount: Deactivated successfully. Sep 13 01:32:19.484671 kubelet[2069]: E0913 01:32:19.484583 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:19.489447 kubelet[2069]: E0913 01:32:19.486289 2069 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:19.581628 kubelet[2069]: I0913 01:32:19.581599 2069 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:20.034256 kubelet[2069]: I0913 01:32:20.034226 2069 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:20.034649 kubelet[2069]: E0913 01:32:20.034633 2069 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-a3199d6d1b\": node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 01:32:20.186492 kubelet[2069]: E0913 01:32:20.186460 2069 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Sep 13 01:32:20.187417 kubelet[2069]: E0913 01:32:20.187389 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 01:32:20.288014 kubelet[2069]: E0913 01:32:20.287902 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 
01:32:20.388950 kubelet[2069]: E0913 01:32:20.388918 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 01:32:20.489838 kubelet[2069]: E0913 01:32:20.489803 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 01:32:20.590492 kubelet[2069]: E0913 01:32:20.590389 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 01:32:20.773987 kubelet[2069]: I0913 01:32:20.773949 2069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:20.788088 kubelet[2069]: I0913 01:32:20.788060 2069 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 01:32:20.788417 kubelet[2069]: I0913 01:32:20.788398 2069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:20.806493 kubelet[2069]: I0913 01:32:20.806466 2069 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 01:32:20.806817 kubelet[2069]: I0913 01:32:20.806776 2069 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:20.816749 kubelet[2069]: I0913 01:32:20.816698 2069 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 01:32:21.345499 kubelet[2069]: I0913 01:32:21.345461 2069 apiserver.go:52] "Watching apiserver" Sep 13 01:32:21.367475 kubelet[2069]: I0913 01:32:21.367445 2069 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 01:32:22.903955 systemd[1]: Reloading. Sep 13 01:32:22.999738 /usr/lib/systemd/system-generators/torcx-generator[2374]: time="2025-09-13T01:32:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:32:23.000155 /usr/lib/systemd/system-generators/torcx-generator[2374]: time="2025-09-13T01:32:23Z" level=info msg="torcx already run" Sep 13 01:32:23.057027 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:32:23.057232 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:32:23.072825 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:32:23.170474 kubelet[2069]: I0913 01:32:23.170385 2069 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:32:23.171272 systemd[1]: Stopping kubelet.service... Sep 13 01:32:23.194127 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:32:23.194464 systemd[1]: Stopped kubelet.service. Sep 13 01:32:23.194620 systemd[1]: kubelet.service: Consumed 1.046s CPU time. Sep 13 01:32:23.196975 systemd[1]: Starting kubelet.service... Sep 13 01:32:23.292448 systemd[1]: Started kubelet.service. 
Sep 13 01:32:23.408130 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:32:23.408446 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 01:32:23.408500 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:32:23.408637 kubelet[2435]: I0913 01:32:23.408611 2435 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:32:23.416365 kubelet[2435]: I0913 01:32:23.416337 2435 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 01:32:23.416519 kubelet[2435]: I0913 01:32:23.416508 2435 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:32:23.416787 kubelet[2435]: I0913 01:32:23.416772 2435 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 01:32:23.418063 kubelet[2435]: I0913 01:32:23.418044 2435 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 01:32:23.421508 kubelet[2435]: I0913 01:32:23.421415 2435 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:32:23.426431 kubelet[2435]: E0913 01:32:23.426397 2435 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:32:23.427360 kubelet[2435]: I0913 01:32:23.427342 
2435 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:32:23.431231 kubelet[2435]: I0913 01:32:23.431211 2435 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:32:23.431622 kubelet[2435]: I0913 01:32:23.431583 2435 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:32:23.431840 kubelet[2435]: I0913 01:32:23.431696 2435 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-a3199d6d1b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPoli
cy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:32:23.431969 kubelet[2435]: I0913 01:32:23.431957 2435 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:32:23.432027 kubelet[2435]: I0913 01:32:23.432019 2435 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 01:32:23.432121 kubelet[2435]: I0913 01:32:23.432112 2435 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:32:23.432354 kubelet[2435]: I0913 01:32:23.432341 2435 kubelet.go:480] "Attempting to sync node with API server" Sep 13 01:32:23.432453 kubelet[2435]: I0913 01:32:23.432442 2435 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:32:23.432528 kubelet[2435]: I0913 01:32:23.432519 2435 kubelet.go:386] "Adding apiserver pod source" Sep 13 01:32:23.432612 kubelet[2435]: I0913 01:32:23.432582 2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:32:23.438661 kubelet[2435]: I0913 01:32:23.438009 2435 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:32:23.438661 kubelet[2435]: I0913 01:32:23.438644 2435 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 01:32:23.440251 kubelet[2435]: I0913 01:32:23.440229 2435 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:32:23.440330 kubelet[2435]: I0913 01:32:23.440265 2435 server.go:1289] "Started kubelet" Sep 13 01:32:23.444624 kubelet[2435]: I0913 01:32:23.442266 2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:32:23.449090 kubelet[2435]: I0913 01:32:23.449011 2435 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 
01:32:23.450184 kubelet[2435]: I0913 01:32:23.450168 2435 server.go:317] "Adding debug handlers to kubelet server" Sep 13 01:32:23.452739 kubelet[2435]: I0913 01:32:23.452689 2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:32:23.453048 kubelet[2435]: I0913 01:32:23.453034 2435 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:32:23.453805 kubelet[2435]: I0913 01:32:23.453773 2435 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:32:23.454944 kubelet[2435]: I0913 01:32:23.454927 2435 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:32:23.455129 kubelet[2435]: E0913 01:32:23.455109 2435 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-a3199d6d1b\" not found" Sep 13 01:32:23.455799 kubelet[2435]: I0913 01:32:23.455784 2435 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:32:23.456016 kubelet[2435]: I0913 01:32:23.456006 2435 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:32:23.464938 kubelet[2435]: I0913 01:32:23.464911 2435 factory.go:223] Registration of the systemd container factory successfully Sep 13 01:32:23.465176 kubelet[2435]: I0913 01:32:23.465155 2435 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:32:23.482662 kubelet[2435]: I0913 01:32:23.482619 2435 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 01:32:23.483803 kubelet[2435]: I0913 01:32:23.483783 2435 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 13 01:32:23.483909 kubelet[2435]: I0913 01:32:23.483899 2435 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 01:32:23.483980 kubelet[2435]: I0913 01:32:23.483969 2435 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 01:32:23.484042 kubelet[2435]: I0913 01:32:23.484034 2435 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 01:32:23.484136 kubelet[2435]: E0913 01:32:23.484119 2435 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:32:23.486887 kubelet[2435]: I0913 01:32:23.486557 2435 factory.go:223] Registration of the containerd container factory successfully Sep 13 01:32:23.504896 kubelet[2435]: E0913 01:32:23.504859 2435 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:32:23.537160 kubelet[2435]: I0913 01:32:23.537130 2435 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:32:23.537160 kubelet[2435]: I0913 01:32:23.537151 2435 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:32:23.537160 kubelet[2435]: I0913 01:32:23.537171 2435 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:32:23.537352 kubelet[2435]: I0913 01:32:23.537285 2435 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:32:23.537352 kubelet[2435]: I0913 01:32:23.537294 2435 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:32:23.537352 kubelet[2435]: I0913 01:32:23.537312 2435 policy_none.go:49] "None policy: Start" Sep 13 01:32:23.537352 kubelet[2435]: I0913 01:32:23.537323 2435 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:32:23.537352 kubelet[2435]: I0913 01:32:23.537331 2435 state_mem.go:35] "Initializing new in-memory state 
store" Sep 13 01:32:23.537460 kubelet[2435]: I0913 01:32:23.537411 2435 state_mem.go:75] "Updated machine memory state" Sep 13 01:32:23.540805 kubelet[2435]: E0913 01:32:23.540784 2435 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 01:32:23.544825 kubelet[2435]: I0913 01:32:23.541460 2435 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:32:23.544825 kubelet[2435]: I0913 01:32:23.543716 2435 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:32:23.544825 kubelet[2435]: I0913 01:32:23.544190 2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:32:23.546065 kubelet[2435]: E0913 01:32:23.546044 2435 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 01:32:23.585486 kubelet[2435]: I0913 01:32:23.585450 2435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:23.588466 kubelet[2435]: I0913 01:32:23.586029 2435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:23.590139 kubelet[2435]: I0913 01:32:23.589085 2435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b" Sep 13 01:32:23.598790 kubelet[2435]: I0913 01:32:23.598764 2435 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 01:32:23.598914 kubelet[2435]: E0913 01:32:23.598813 2435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b" Sep 13 
01:32:23.601679 kubelet[2435]: I0913 01:32:23.601654 2435 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 01:32:23.601857 kubelet[2435]: I0913 01:32:23.601829 2435 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 01:32:23.601911 kubelet[2435]: E0913 01:32:23.601869 2435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-a3199d6d1b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.601970 kubelet[2435]: E0913 01:32:23.601842 2435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.646264 kubelet[2435]: I0913 01:32:23.646228 2435 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.656425 kubelet[2435]: I0913 01:32:23.656396 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.656650 kubelet[2435]: I0913 01:32:23.656618 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.656752 kubelet[2435]: I0913 01:32:23.656738 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.656865 kubelet[2435]: I0913 01:32:23.656851 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.656961 kubelet[2435]: I0913 01:32:23.656946 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed9b3dd9c5ad224cb56f8678246f5650-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-a3199d6d1b\" (UID: \"ed9b3dd9c5ad224cb56f8678246f5650\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.657054 kubelet[2435]: I0913 01:32:23.657041 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b58dce9829a7621eb11a0baf7d0004e6-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" (UID: \"b58dce9829a7621eb11a0baf7d0004e6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.657137 kubelet[2435]: I0913 01:32:23.657125 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b58dce9829a7621eb11a0baf7d0004e6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" (UID: \"b58dce9829a7621eb11a0baf7d0004e6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.657228 kubelet[2435]: I0913 01:32:23.657215 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b58dce9829a7621eb11a0baf7d0004e6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" (UID: \"b58dce9829a7621eb11a0baf7d0004e6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.657332 kubelet[2435]: I0913 01:32:23.657319 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bffa82a68a0a6a022dbb4508b111e830-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" (UID: \"bffa82a68a0a6a022dbb4508b111e830\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.663085 kubelet[2435]: I0913 01:32:23.663047 2435 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.663184 kubelet[2435]: I0913 01:32:23.663120 2435 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:23.970324 sudo[2470]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 01:32:23.970998 sudo[2470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 13 01:32:24.433839 kubelet[2435]: I0913 01:32:24.433800 2435 apiserver.go:52] "Watching apiserver"
Sep 13 01:32:24.456844 kubelet[2435]: I0913 01:32:24.456807 2435 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 01:32:24.462105 sudo[2470]: pam_unix(sudo:session): session closed for user root
Sep 13 01:32:24.527722 kubelet[2435]: I0913 01:32:24.527691 2435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:24.528084 kubelet[2435]: I0913 01:32:24.528063 2435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:24.528343 kubelet[2435]: I0913 01:32:24.528318 2435 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:24.543748 kubelet[2435]: I0913 01:32:24.543711 2435 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 01:32:24.543903 kubelet[2435]: E0913 01:32:24.543773 2435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-a3199d6d1b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:24.544024 kubelet[2435]: I0913 01:32:24.543997 2435 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 01:32:24.544064 kubelet[2435]: I0913 01:32:24.544035 2435 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 13 01:32:24.544064 kubelet[2435]: E0913 01:32:24.544056 2435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-a3199d6d1b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:24.544226 kubelet[2435]: E0913 01:32:24.544206 2435 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-a3199d6d1b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b"
Sep 13 01:32:24.596397 kubelet[2435]: I0913 01:32:24.596334 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-a3199d6d1b" podStartSLOduration=4.596317751 podStartE2EDuration="4.596317751s" podCreationTimestamp="2025-09-13 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:32:24.578762733 +0000 UTC m=+1.278303641" watchObservedRunningTime="2025-09-13 01:32:24.596317751 +0000 UTC m=+1.295858659"
Sep 13 01:32:24.611484 kubelet[2435]: I0913 01:32:24.611417 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-a3199d6d1b" podStartSLOduration=4.611401697 podStartE2EDuration="4.611401697s" podCreationTimestamp="2025-09-13 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:32:24.596795669 +0000 UTC m=+1.296336577" watchObservedRunningTime="2025-09-13 01:32:24.611401697 +0000 UTC m=+1.310942605"
Sep 13 01:32:26.541383 sudo[1762]: pam_unix(sudo:session): session closed for user root
Sep 13 01:32:26.612763 sshd[1759]: pam_unix(sshd:session): session closed for user core
Sep 13 01:32:26.615668 systemd[1]: sshd@4-10.200.20.47:22-10.200.16.10:54102.service: Deactivated successfully.
Sep 13 01:32:26.616392 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 01:32:26.616545 systemd[1]: session-7.scope: Consumed 6.765s CPU time.
Sep 13 01:32:26.616977 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit.
Sep 13 01:32:26.617754 systemd-logind[1462]: Removed session 7.
Sep 13 01:32:28.390011 kubelet[2435]: I0913 01:32:28.389964 2435 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 01:32:28.390715 env[1477]: time="2025-09-13T01:32:28.390582831Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 01:32:28.391316 kubelet[2435]: I0913 01:32:28.391126 2435 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 01:32:29.089600 kubelet[2435]: I0913 01:32:29.089523 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-a3199d6d1b" podStartSLOduration=9.089497598 podStartE2EDuration="9.089497598s" podCreationTimestamp="2025-09-13 01:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:32:24.611697496 +0000 UTC m=+1.311238404" watchObservedRunningTime="2025-09-13 01:32:29.089497598 +0000 UTC m=+5.789038546"
Sep 13 01:32:29.099752 systemd[1]: Created slice kubepods-besteffort-podecdc02db_23b9_4c0d_9f4c_ef159028d323.slice.
Sep 13 01:32:29.129310 systemd[1]: Created slice kubepods-burstable-pod0b2ecc18_29ed_409b_bdee_b28f85cc8c6d.slice.
Sep 13 01:32:29.185013 kubelet[2435]: I0913 01:32:29.184981 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecdc02db-23b9-4c0d-9f4c-ef159028d323-xtables-lock\") pod \"kube-proxy-8vqht\" (UID: \"ecdc02db-23b9-4c0d-9f4c-ef159028d323\") " pod="kube-system/kube-proxy-8vqht"
Sep 13 01:32:29.185283 kubelet[2435]: I0913 01:32:29.185267 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-bpf-maps\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.185398 kubelet[2435]: I0913 01:32:29.185384 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-cgroup\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.185483 kubelet[2435]: I0913 01:32:29.185472 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cni-path\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.185577 kubelet[2435]: I0913 01:32:29.185564 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-lib-modules\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.185722 kubelet[2435]: I0913 01:32:29.185707 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-xtables-lock\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.185825 kubelet[2435]: I0913 01:32:29.185811 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5stnz\" (UniqueName: \"kubernetes.io/projected/ecdc02db-23b9-4c0d-9f4c-ef159028d323-kube-api-access-5stnz\") pod \"kube-proxy-8vqht\" (UID: \"ecdc02db-23b9-4c0d-9f4c-ef159028d323\") " pod="kube-system/kube-proxy-8vqht"
Sep 13 01:32:29.185915 kubelet[2435]: I0913 01:32:29.185904 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-clustermesh-secrets\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186003 kubelet[2435]: I0913 01:32:29.185989 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-kernel\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186082 kubelet[2435]: I0913 01:32:29.186071 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hubble-tls\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186165 kubelet[2435]: I0913 01:32:29.186154 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cv42\" (UniqueName: \"kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-kube-api-access-8cv42\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186254 kubelet[2435]: I0913 01:32:29.186242 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecdc02db-23b9-4c0d-9f4c-ef159028d323-lib-modules\") pod \"kube-proxy-8vqht\" (UID: \"ecdc02db-23b9-4c0d-9f4c-ef159028d323\") " pod="kube-system/kube-proxy-8vqht"
Sep 13 01:32:29.186344 kubelet[2435]: I0913 01:32:29.186333 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-etc-cni-netd\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186429 kubelet[2435]: I0913 01:32:29.186417 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-net\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186522 kubelet[2435]: I0913 01:32:29.186509 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ecdc02db-23b9-4c0d-9f4c-ef159028d323-kube-proxy\") pod \"kube-proxy-8vqht\" (UID: \"ecdc02db-23b9-4c0d-9f4c-ef159028d323\") " pod="kube-system/kube-proxy-8vqht"
Sep 13 01:32:29.186632 kubelet[2435]: I0913 01:32:29.186619 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-run\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186731 kubelet[2435]: I0913 01:32:29.186720 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hostproc\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.186826 kubelet[2435]: I0913 01:32:29.186809 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-config-path\") pod \"cilium-7kq54\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " pod="kube-system/cilium-7kq54"
Sep 13 01:32:29.289467 kubelet[2435]: I0913 01:32:29.289433 2435 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 01:32:29.405849 env[1477]: time="2025-09-13T01:32:29.405341610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vqht,Uid:ecdc02db-23b9-4c0d-9f4c-ef159028d323,Namespace:kube-system,Attempt:0,}"
Sep 13 01:32:29.432738 env[1477]: time="2025-09-13T01:32:29.432441326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kq54,Uid:0b2ecc18-29ed-409b-bdee-b28f85cc8c6d,Namespace:kube-system,Attempt:0,}"
Sep 13 01:32:29.448078 env[1477]: time="2025-09-13T01:32:29.447973197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:32:29.448078 env[1477]: time="2025-09-13T01:32:29.448012477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:32:29.448078 env[1477]: time="2025-09-13T01:32:29.448025397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:32:29.448906 env[1477]: time="2025-09-13T01:32:29.448855914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/166c1af635aa600d54271106fe84a8aa55dbe70307e28c725bd4ed045551fd3b pid=2520 runtime=io.containerd.runc.v2
Sep 13 01:32:29.462481 systemd[1]: Started cri-containerd-166c1af635aa600d54271106fe84a8aa55dbe70307e28c725bd4ed045551fd3b.scope.
Sep 13 01:32:29.480650 env[1477]: time="2025-09-13T01:32:29.480396576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:32:29.480650 env[1477]: time="2025-09-13T01:32:29.480435816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:32:29.480650 env[1477]: time="2025-09-13T01:32:29.480535975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:32:29.481125 env[1477]: time="2025-09-13T01:32:29.480857774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c pid=2555 runtime=io.containerd.runc.v2
Sep 13 01:32:29.493955 env[1477]: time="2025-09-13T01:32:29.493900734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vqht,Uid:ecdc02db-23b9-4c0d-9f4c-ef159028d323,Namespace:kube-system,Attempt:0,} returns sandbox id \"166c1af635aa600d54271106fe84a8aa55dbe70307e28c725bd4ed045551fd3b\""
Sep 13 01:32:29.497022 systemd[1]: Started cri-containerd-24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c.scope.
Sep 13 01:32:29.510923 env[1477]: time="2025-09-13T01:32:29.510885721Z" level=info msg="CreateContainer within sandbox \"166c1af635aa600d54271106fe84a8aa55dbe70307e28c725bd4ed045551fd3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 01:32:29.538221 env[1477]: time="2025-09-13T01:32:29.538177075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kq54,Uid:0b2ecc18-29ed-409b-bdee-b28f85cc8c6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\""
Sep 13 01:32:29.540344 env[1477]: time="2025-09-13T01:32:29.540315229Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 01:32:29.556346 env[1477]: time="2025-09-13T01:32:29.556315138Z" level=info msg="CreateContainer within sandbox \"166c1af635aa600d54271106fe84a8aa55dbe70307e28c725bd4ed045551fd3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"734eb4d9fc3c1b2a5da3b13d294dba350830da755b2636f481fb61f178f7f87f\""
Sep 13 01:32:29.557231 env[1477]: time="2025-09-13T01:32:29.557195136Z" level=info msg="StartContainer for \"734eb4d9fc3c1b2a5da3b13d294dba350830da755b2636f481fb61f178f7f87f\""
Sep 13 01:32:29.574254 systemd[1]: Started cri-containerd-734eb4d9fc3c1b2a5da3b13d294dba350830da755b2636f481fb61f178f7f87f.scope.
Sep 13 01:32:29.624551 systemd[1]: Created slice kubepods-besteffort-pod3f1e2457_a2cd_4c4e_aa62_9fd6faf2345e.slice.
Sep 13 01:32:29.639062 env[1477]: time="2025-09-13T01:32:29.639005240Z" level=info msg="StartContainer for \"734eb4d9fc3c1b2a5da3b13d294dba350830da755b2636f481fb61f178f7f87f\" returns successfully"
Sep 13 01:32:29.691215 kubelet[2435]: I0913 01:32:29.691179 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgzg\" (UniqueName: \"kubernetes.io/projected/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-kube-api-access-6hgzg\") pod \"cilium-operator-6c4d7847fc-cxfhd\" (UID: \"3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e\") " pod="kube-system/cilium-operator-6c4d7847fc-cxfhd"
Sep 13 01:32:29.691621 kubelet[2435]: I0913 01:32:29.691606 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cxfhd\" (UID: \"3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e\") " pod="kube-system/cilium-operator-6c4d7847fc-cxfhd"
Sep 13 01:32:29.928674 env[1477]: time="2025-09-13T01:32:29.928628815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cxfhd,Uid:3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e,Namespace:kube-system,Attempt:0,}"
Sep 13 01:32:29.964028 env[1477]: time="2025-09-13T01:32:29.963571745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:32:29.964028 env[1477]: time="2025-09-13T01:32:29.963714705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:32:29.964028 env[1477]: time="2025-09-13T01:32:29.963741625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:32:29.964298 env[1477]: time="2025-09-13T01:32:29.964255943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666 pid=2681 runtime=io.containerd.runc.v2
Sep 13 01:32:29.975763 systemd[1]: Started cri-containerd-5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666.scope.
Sep 13 01:32:30.010326 env[1477]: time="2025-09-13T01:32:30.010269760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cxfhd,Uid:3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666\""
Sep 13 01:32:31.939430 kubelet[2435]: I0913 01:32:31.939372 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8vqht" podStartSLOduration=2.939355907 podStartE2EDuration="2.939355907s" podCreationTimestamp="2025-09-13 01:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:32:30.554792779 +0000 UTC m=+7.254333687" watchObservedRunningTime="2025-09-13 01:32:31.939355907 +0000 UTC m=+8.638896775"
Sep 13 01:32:34.826546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705563453.mount: Deactivated successfully.
Sep 13 01:32:37.430728 env[1477]: time="2025-09-13T01:32:37.430675732Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:37.438579 env[1477]: time="2025-09-13T01:32:37.438540872Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:37.443002 env[1477]: time="2025-09-13T01:32:37.442971900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:37.443236 env[1477]: time="2025-09-13T01:32:37.443207340Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 13 01:32:37.445201 env[1477]: time="2025-09-13T01:32:37.445155135Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 01:32:37.451281 env[1477]: time="2025-09-13T01:32:37.451252959Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:32:37.479314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617648910.mount: Deactivated successfully.
Sep 13 01:32:37.484448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212832630.mount: Deactivated successfully.
Sep 13 01:32:37.501018 env[1477]: time="2025-09-13T01:32:37.500957311Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\""
Sep 13 01:32:37.502926 env[1477]: time="2025-09-13T01:32:37.501826708Z" level=info msg="StartContainer for \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\""
Sep 13 01:32:37.515836 systemd[1]: Started cri-containerd-245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c.scope.
Sep 13 01:32:37.549166 env[1477]: time="2025-09-13T01:32:37.549116507Z" level=info msg="StartContainer for \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\" returns successfully"
Sep 13 01:32:37.549500 systemd[1]: cri-containerd-245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c.scope: Deactivated successfully.
Sep 13 01:32:38.476893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c-rootfs.mount: Deactivated successfully.
Sep 13 01:32:38.832928 env[1477]: time="2025-09-13T01:32:38.832543449Z" level=info msg="shim disconnected" id=245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c
Sep 13 01:32:38.832928 env[1477]: time="2025-09-13T01:32:38.832613449Z" level=warning msg="cleaning up after shim disconnected" id=245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c namespace=k8s.io
Sep 13 01:32:38.832928 env[1477]: time="2025-09-13T01:32:38.832622529Z" level=info msg="cleaning up dead shim"
Sep 13 01:32:38.841398 env[1477]: time="2025-09-13T01:32:38.841359027Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:32:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2853 runtime=io.containerd.runc.v2\n"
Sep 13 01:32:39.569845 env[1477]: time="2025-09-13T01:32:39.569790186Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 01:32:39.610246 env[1477]: time="2025-09-13T01:32:39.610197006Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\""
Sep 13 01:32:39.610779 env[1477]: time="2025-09-13T01:32:39.610753765Z" level=info msg="StartContainer for \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\""
Sep 13 01:32:39.632301 systemd[1]: run-containerd-runc-k8s.io-f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a-runc.JrtOYS.mount: Deactivated successfully.
Sep 13 01:32:39.635694 systemd[1]: Started cri-containerd-f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a.scope.
Sep 13 01:32:39.669858 env[1477]: time="2025-09-13T01:32:39.669494820Z" level=info msg="StartContainer for \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\" returns successfully"
Sep 13 01:32:39.677869 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 01:32:39.678057 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 01:32:39.679023 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 01:32:39.680613 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:32:39.686743 systemd[1]: cri-containerd-f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a.scope: Deactivated successfully.
Sep 13 01:32:39.689726 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:32:39.721211 env[1477]: time="2025-09-13T01:32:39.721093493Z" level=info msg="shim disconnected" id=f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a
Sep 13 01:32:39.721522 env[1477]: time="2025-09-13T01:32:39.721502132Z" level=warning msg="cleaning up after shim disconnected" id=f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a namespace=k8s.io
Sep 13 01:32:39.721625 env[1477]: time="2025-09-13T01:32:39.721609492Z" level=info msg="cleaning up dead shim"
Sep 13 01:32:39.728289 env[1477]: time="2025-09-13T01:32:39.728255556Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:32:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2917 runtime=io.containerd.runc.v2\n"
Sep 13 01:32:40.583528 env[1477]: time="2025-09-13T01:32:40.583486484Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:32:40.596091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a-rootfs.mount: Deactivated successfully.
Sep 13 01:32:40.632672 env[1477]: time="2025-09-13T01:32:40.632628646Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\""
Sep 13 01:32:40.634867 env[1477]: time="2025-09-13T01:32:40.633448404Z" level=info msg="StartContainer for \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\""
Sep 13 01:32:40.683276 systemd[1]: Started cri-containerd-534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a.scope.
Sep 13 01:32:40.730650 systemd[1]: cri-containerd-534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a.scope: Deactivated successfully.
Sep 13 01:32:40.739826 env[1477]: time="2025-09-13T01:32:40.739778469Z" level=info msg="StartContainer for \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\" returns successfully"
Sep 13 01:32:40.768005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a-rootfs.mount: Deactivated successfully.
Sep 13 01:32:40.989685 env[1477]: time="2025-09-13T01:32:40.989639828Z" level=info msg="shim disconnected" id=534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a
Sep 13 01:32:40.989919 env[1477]: time="2025-09-13T01:32:40.989901707Z" level=warning msg="cleaning up after shim disconnected" id=534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a namespace=k8s.io
Sep 13 01:32:40.989997 env[1477]: time="2025-09-13T01:32:40.989983787Z" level=info msg="cleaning up dead shim"
Sep 13 01:32:41.009315 env[1477]: time="2025-09-13T01:32:41.009275461Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:32:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2975 runtime=io.containerd.runc.v2\n"
Sep 13 01:32:41.184093 env[1477]: time="2025-09-13T01:32:41.184050690Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:41.190256 env[1477]: time="2025-09-13T01:32:41.190215756Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:41.194010 env[1477]: time="2025-09-13T01:32:41.193980947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:32:41.194421 env[1477]: time="2025-09-13T01:32:41.194393066Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 13 01:32:41.203068 env[1477]: time="2025-09-13T01:32:41.203030926Z" level=info msg="CreateContainer within sandbox \"5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 01:32:41.233463 env[1477]: time="2025-09-13T01:32:41.233418694Z" level=info msg="CreateContainer within sandbox \"5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\""
Sep 13 01:32:41.235607 env[1477]: time="2025-09-13T01:32:41.234834491Z" level=info msg="StartContainer for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\""
Sep 13 01:32:41.249438 systemd[1]: Started cri-containerd-6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24.scope.
Sep 13 01:32:41.285818 env[1477]: time="2025-09-13T01:32:41.285773491Z" level=info msg="StartContainer for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" returns successfully"
Sep 13 01:32:41.579672 env[1477]: time="2025-09-13T01:32:41.579555001Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:32:41.625241 env[1477]: time="2025-09-13T01:32:41.625193413Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\""
Sep 13 01:32:41.626289 env[1477]: time="2025-09-13T01:32:41.626260571Z" level=info msg="StartContainer for \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\""
Sep 13 01:32:41.673428 systemd[1]: Started cri-containerd-41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054.scope.
Sep 13 01:32:41.675626 kubelet[2435]: I0913 01:32:41.674838 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cxfhd" podStartSLOduration=1.490872069 podStartE2EDuration="12.674821377s" podCreationTimestamp="2025-09-13 01:32:29 +0000 UTC" firstStartedPulling="2025-09-13 01:32:30.011471876 +0000 UTC m=+6.711012744" lastFinishedPulling="2025-09-13 01:32:41.195421144 +0000 UTC m=+17.894962052" observedRunningTime="2025-09-13 01:32:41.604391662 +0000 UTC m=+18.303932570" watchObservedRunningTime="2025-09-13 01:32:41.674821377 +0000 UTC m=+18.374362525"
Sep 13 01:32:41.705454 systemd[1]: cri-containerd-41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054.scope: Deactivated successfully.
Sep 13 01:32:41.709040 env[1477]: time="2025-09-13T01:32:41.708996776Z" level=info msg="StartContainer for \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\" returns successfully"
Sep 13 01:32:41.836461 env[1477]: time="2025-09-13T01:32:41.836331797Z" level=info msg="shim disconnected" id=41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054
Sep 13 01:32:41.836461 env[1477]: time="2025-09-13T01:32:41.836378597Z" level=warning msg="cleaning up after shim disconnected" id=41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054 namespace=k8s.io
Sep 13 01:32:41.836461 env[1477]: time="2025-09-13T01:32:41.836387597Z" level=info msg="cleaning up dead shim"
Sep 13 01:32:41.852709 env[1477]: time="2025-09-13T01:32:41.852651479Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:32:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3071 runtime=io.containerd.runc.v2\n"
Sep 13 01:32:42.582424 env[1477]: time="2025-09-13T01:32:42.582379634Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:32:42.596504 systemd[1]: run-containerd-runc-k8s.io-41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054-runc.5PCYx3.mount: Deactivated successfully.
Sep 13 01:32:42.596631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054-rootfs.mount: Deactivated successfully.
Sep 13 01:32:42.624113 env[1477]: time="2025-09-13T01:32:42.624061858Z" level=info msg="CreateContainer within sandbox \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\""
Sep 13 01:32:42.626282 env[1477]: time="2025-09-13T01:32:42.624841577Z" level=info msg="StartContainer for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\""
Sep 13 01:32:42.642624 systemd[1]: Started cri-containerd-3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a.scope.
Sep 13 01:32:42.692015 env[1477]: time="2025-09-13T01:32:42.691959582Z" level=info msg="StartContainer for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" returns successfully"
Sep 13 01:32:42.791623 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 13 01:32:42.865978 kubelet[2435]: I0913 01:32:42.865727 2435 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 01:32:42.927538 systemd[1]: Created slice kubepods-burstable-pod643bd044_49b3_4d66_b950_38c5a1de75f2.slice.
Sep 13 01:32:42.935565 systemd[1]: Created slice kubepods-burstable-pod1ada418f_34e5_4ce5_80d8_4018ded75a87.slice.
Sep 13 01:32:42.976987 kubelet[2435]: I0913 01:32:42.976951 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/643bd044-49b3-4d66-b950-38c5a1de75f2-config-volume\") pod \"coredns-674b8bbfcf-l4q7f\" (UID: \"643bd044-49b3-4d66-b950-38c5a1de75f2\") " pod="kube-system/coredns-674b8bbfcf-l4q7f" Sep 13 01:32:42.977185 kubelet[2435]: I0913 01:32:42.977168 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ada418f-34e5-4ce5-80d8-4018ded75a87-config-volume\") pod \"coredns-674b8bbfcf-d8xbx\" (UID: \"1ada418f-34e5-4ce5-80d8-4018ded75a87\") " pod="kube-system/coredns-674b8bbfcf-d8xbx" Sep 13 01:32:42.977289 kubelet[2435]: I0913 01:32:42.977272 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhld\" (UniqueName: \"kubernetes.io/projected/643bd044-49b3-4d66-b950-38c5a1de75f2-kube-api-access-lhhld\") pod \"coredns-674b8bbfcf-l4q7f\" (UID: \"643bd044-49b3-4d66-b950-38c5a1de75f2\") " pod="kube-system/coredns-674b8bbfcf-l4q7f" Sep 13 01:32:42.977375 kubelet[2435]: I0913 01:32:42.977362 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj74v\" (UniqueName: \"kubernetes.io/projected/1ada418f-34e5-4ce5-80d8-4018ded75a87-kube-api-access-qj74v\") pod \"coredns-674b8bbfcf-d8xbx\" (UID: \"1ada418f-34e5-4ce5-80d8-4018ded75a87\") " pod="kube-system/coredns-674b8bbfcf-d8xbx" Sep 13 01:32:43.233732 env[1477]: time="2025-09-13T01:32:43.233691349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4q7f,Uid:643bd044-49b3-4d66-b950-38c5a1de75f2,Namespace:kube-system,Attempt:0,}" Sep 13 01:32:43.238786 env[1477]: time="2025-09-13T01:32:43.238601538Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-d8xbx,Uid:1ada418f-34e5-4ce5-80d8-4018ded75a87,Namespace:kube-system,Attempt:0,}" Sep 13 01:32:43.628531 kubelet[2435]: I0913 01:32:43.628397 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7kq54" podStartSLOduration=6.723703555 podStartE2EDuration="14.628380622s" podCreationTimestamp="2025-09-13 01:32:29 +0000 UTC" firstStartedPulling="2025-09-13 01:32:29.53968239 +0000 UTC m=+6.239223298" lastFinishedPulling="2025-09-13 01:32:37.444359457 +0000 UTC m=+14.143900365" observedRunningTime="2025-09-13 01:32:43.626960785 +0000 UTC m=+20.326501693" watchObservedRunningTime="2025-09-13 01:32:43.628380622 +0000 UTC m=+20.327921530" Sep 13 01:32:43.674631 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 01:32:45.330863 systemd-networkd[1640]: cilium_host: Link UP Sep 13 01:32:45.334671 systemd-networkd[1640]: cilium_net: Link UP Sep 13 01:32:45.337208 systemd-networkd[1640]: cilium_net: Gained carrier Sep 13 01:32:45.342514 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 01:32:45.342630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 01:32:45.342791 systemd-networkd[1640]: cilium_host: Gained carrier Sep 13 01:32:45.345326 systemd-networkd[1640]: cilium_net: Gained IPv6LL Sep 13 01:32:45.402722 systemd-networkd[1640]: cilium_host: Gained IPv6LL Sep 13 01:32:45.597805 systemd-networkd[1640]: cilium_vxlan: Link UP Sep 13 01:32:45.597812 systemd-networkd[1640]: cilium_vxlan: Gained carrier Sep 13 01:32:45.918618 kernel: NET: Registered PF_ALG protocol family Sep 13 01:32:46.913523 systemd-networkd[1640]: lxc_health: Link UP Sep 13 01:32:46.930643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 01:32:46.931152 systemd-networkd[1640]: lxc_health: Gained carrier Sep 13 01:32:47.108712 systemd-networkd[1640]: cilium_vxlan: Gained IPv6LL Sep 13 
01:32:47.307074 systemd-networkd[1640]: lxc005ddeeda7b6: Link UP Sep 13 01:32:47.316622 kernel: eth0: renamed from tmpb018e Sep 13 01:32:47.325849 systemd-networkd[1640]: lxc4588f6c83d43: Link UP Sep 13 01:32:47.337674 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc005ddeeda7b6: link becomes ready Sep 13 01:32:47.337824 systemd-networkd[1640]: lxc005ddeeda7b6: Gained carrier Sep 13 01:32:47.344622 kernel: eth0: renamed from tmp6eb51 Sep 13 01:32:47.351719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4588f6c83d43: link becomes ready Sep 13 01:32:47.351698 systemd-networkd[1640]: lxc4588f6c83d43: Gained carrier Sep 13 01:32:48.516740 systemd-networkd[1640]: lxc005ddeeda7b6: Gained IPv6LL Sep 13 01:32:48.708715 systemd-networkd[1640]: lxc_health: Gained IPv6LL Sep 13 01:32:49.157709 systemd-networkd[1640]: lxc4588f6c83d43: Gained IPv6LL Sep 13 01:32:50.880855 env[1477]: time="2025-09-13T01:32:50.880778257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:32:50.881210 env[1477]: time="2025-09-13T01:32:50.881184176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:32:50.881294 env[1477]: time="2025-09-13T01:32:50.881274576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:32:50.884653 env[1477]: time="2025-09-13T01:32:50.881714095Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b018e879b04091f7c5c0e6bb964f32087ae91fb59ed23388d59a3c3a395d66ad pid=3621 runtime=io.containerd.runc.v2 Sep 13 01:32:50.904737 systemd[1]: Started cri-containerd-b018e879b04091f7c5c0e6bb964f32087ae91fb59ed23388d59a3c3a395d66ad.scope. 
Sep 13 01:32:50.906276 systemd[1]: run-containerd-runc-k8s.io-b018e879b04091f7c5c0e6bb964f32087ae91fb59ed23388d59a3c3a395d66ad-runc.BCfyBM.mount: Deactivated successfully.
Sep 13 01:32:50.927558 env[1477]: time="2025-09-13T01:32:50.927480046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:32:50.927558 env[1477]: time="2025-09-13T01:32:50.927527086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:32:50.928262 env[1477]: time="2025-09-13T01:32:50.928209205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:32:50.928533 env[1477]: time="2025-09-13T01:32:50.928486844Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6eb51633efbe4fa663a6464b0f3a3b26c8edfce2dac70532f84d35d931f95bea pid=3653 runtime=io.containerd.runc.v2
Sep 13 01:32:50.953209 systemd[1]: Started cri-containerd-6eb51633efbe4fa663a6464b0f3a3b26c8edfce2dac70532f84d35d931f95bea.scope.
Sep 13 01:32:50.969846 env[1477]: time="2025-09-13T01:32:50.969799284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l4q7f,Uid:643bd044-49b3-4d66-b950-38c5a1de75f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b018e879b04091f7c5c0e6bb964f32087ae91fb59ed23388d59a3c3a395d66ad\""
Sep 13 01:32:50.981223 env[1477]: time="2025-09-13T01:32:50.981191222Z" level=info msg="CreateContainer within sandbox \"b018e879b04091f7c5c0e6bb964f32087ae91fb59ed23388d59a3c3a395d66ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 01:32:51.002982 env[1477]: time="2025-09-13T01:32:51.002939460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d8xbx,Uid:1ada418f-34e5-4ce5-80d8-4018ded75a87,Namespace:kube-system,Attempt:0,} returns sandbox id \"6eb51633efbe4fa663a6464b0f3a3b26c8edfce2dac70532f84d35d931f95bea\""
Sep 13 01:32:51.012790 env[1477]: time="2025-09-13T01:32:51.012753922Z" level=info msg="CreateContainer within sandbox \"6eb51633efbe4fa663a6464b0f3a3b26c8edfce2dac70532f84d35d931f95bea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 01:32:51.025089 env[1477]: time="2025-09-13T01:32:51.025044539Z" level=info msg="CreateContainer within sandbox \"b018e879b04091f7c5c0e6bb964f32087ae91fb59ed23388d59a3c3a395d66ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b45c641b48a06af6a6f41871bb7fbc0decd6b458e8858aa96fb2e92fd742dc4\""
Sep 13 01:32:51.025632 env[1477]: time="2025-09-13T01:32:51.025582297Z" level=info msg="StartContainer for \"1b45c641b48a06af6a6f41871bb7fbc0decd6b458e8858aa96fb2e92fd742dc4\""
Sep 13 01:32:51.044642 systemd[1]: Started cri-containerd-1b45c641b48a06af6a6f41871bb7fbc0decd6b458e8858aa96fb2e92fd742dc4.scope.
Sep 13 01:32:51.070887 env[1477]: time="2025-09-13T01:32:51.070842252Z" level=info msg="CreateContainer within sandbox \"6eb51633efbe4fa663a6464b0f3a3b26c8edfce2dac70532f84d35d931f95bea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6bcba6f2575d75d59379b92fa8f9c8f1b507cdf1545ce4c05e14af1eda3d401\""
Sep 13 01:32:51.071803 env[1477]: time="2025-09-13T01:32:51.071777930Z" level=info msg="StartContainer for \"c6bcba6f2575d75d59379b92fa8f9c8f1b507cdf1545ce4c05e14af1eda3d401\""
Sep 13 01:32:51.093385 env[1477]: time="2025-09-13T01:32:51.093332849Z" level=info msg="StartContainer for \"1b45c641b48a06af6a6f41871bb7fbc0decd6b458e8858aa96fb2e92fd742dc4\" returns successfully"
Sep 13 01:32:51.099112 systemd[1]: Started cri-containerd-c6bcba6f2575d75d59379b92fa8f9c8f1b507cdf1545ce4c05e14af1eda3d401.scope.
Sep 13 01:32:51.137248 env[1477]: time="2025-09-13T01:32:51.136596007Z" level=info msg="StartContainer for \"c6bcba6f2575d75d59379b92fa8f9c8f1b507cdf1545ce4c05e14af1eda3d401\" returns successfully"
Sep 13 01:32:51.625179 kubelet[2435]: I0913 01:32:51.625132 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d8xbx" podStartSLOduration=22.625112241 podStartE2EDuration="22.625112241s" podCreationTimestamp="2025-09-13 01:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:32:51.622023607 +0000 UTC m=+28.321564515" watchObservedRunningTime="2025-09-13 01:32:51.625112241 +0000 UTC m=+28.324653149"
Sep 13 01:32:51.645127 kubelet[2435]: I0913 01:32:51.645069 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l4q7f" podStartSLOduration=22.645049004 podStartE2EDuration="22.645049004s" podCreationTimestamp="2025-09-13 01:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:32:51.642460808 +0000 UTC m=+28.342001716" watchObservedRunningTime="2025-09-13 01:32:51.645049004 +0000 UTC m=+28.344589912"
Sep 13 01:32:56.599787 kubelet[2435]: I0913 01:32:56.599754 2435 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 01:34:57.801490 systemd[1]: Started sshd@5-10.200.20.47:22-10.200.16.10:57860.service.
Sep 13 01:34:58.210773 sshd[3793]: Accepted publickey for core from 10.200.16.10 port 57860 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:34:58.212552 sshd[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:34:58.216970 systemd[1]: Started session-8.scope.
Sep 13 01:34:58.218235 systemd-logind[1462]: New session 8 of user core.
Sep 13 01:34:58.681236 sshd[3793]: pam_unix(sshd:session): session closed for user core
Sep 13 01:34:58.684471 systemd[1]: sshd@5-10.200.20.47:22-10.200.16.10:57860.service: Deactivated successfully.
Sep 13 01:34:58.685193 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 01:34:58.685717 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit.
Sep 13 01:34:58.686656 systemd-logind[1462]: Removed session 8.
Sep 13 01:35:03.750058 systemd[1]: Started sshd@6-10.200.20.47:22-10.200.16.10:52928.service.
Sep 13 01:35:04.162646 sshd[3809]: Accepted publickey for core from 10.200.16.10 port 52928 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:04.164303 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:04.168513 systemd[1]: Started session-9.scope.
Sep 13 01:35:04.169539 systemd-logind[1462]: New session 9 of user core.
Sep 13 01:35:04.551021 sshd[3809]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:04.554108 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 01:35:04.554765 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit.
Sep 13 01:35:04.554927 systemd[1]: sshd@6-10.200.20.47:22-10.200.16.10:52928.service: Deactivated successfully.
Sep 13 01:35:04.555943 systemd-logind[1462]: Removed session 9.
Sep 13 01:35:09.619999 systemd[1]: Started sshd@7-10.200.20.47:22-10.200.16.10:52936.service.
Sep 13 01:35:10.030721 sshd[3822]: Accepted publickey for core from 10.200.16.10 port 52936 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:10.032032 sshd[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:10.036448 systemd[1]: Started session-10.scope.
Sep 13 01:35:10.036780 systemd-logind[1462]: New session 10 of user core.
Sep 13 01:35:10.397161 sshd[3822]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:10.400571 systemd[1]: sshd@7-10.200.20.47:22-10.200.16.10:52936.service: Deactivated successfully.
Sep 13 01:35:10.401310 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 01:35:10.401780 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit.
Sep 13 01:35:10.405838 systemd-logind[1462]: Removed session 10.
Sep 13 01:35:15.466029 systemd[1]: Started sshd@8-10.200.20.47:22-10.200.16.10:51926.service.
Sep 13 01:35:15.874221 sshd[3836]: Accepted publickey for core from 10.200.16.10 port 51926 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:15.875861 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:15.880179 systemd[1]: Started session-11.scope.
Sep 13 01:35:15.880784 systemd-logind[1462]: New session 11 of user core.
Sep 13 01:35:16.242150 sshd[3836]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:16.245294 systemd[1]: sshd@8-10.200.20.47:22-10.200.16.10:51926.service: Deactivated successfully.
Sep 13 01:35:16.246010 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 01:35:16.246799 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit.
Sep 13 01:35:16.247574 systemd-logind[1462]: Removed session 11.
Sep 13 01:35:21.330978 systemd[1]: Started sshd@9-10.200.20.47:22-10.200.16.10:44264.service.
Sep 13 01:35:21.741866 sshd[3849]: Accepted publickey for core from 10.200.16.10 port 44264 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:21.743525 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:21.747868 systemd[1]: Started session-12.scope.
Sep 13 01:35:21.748190 systemd-logind[1462]: New session 12 of user core.
Sep 13 01:35:22.116777 sshd[3849]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:22.119553 systemd[1]: sshd@9-10.200.20.47:22-10.200.16.10:44264.service: Deactivated successfully.
Sep 13 01:35:22.120318 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 01:35:22.120872 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit.
Sep 13 01:35:22.121838 systemd-logind[1462]: Removed session 12.
Sep 13 01:35:22.186463 systemd[1]: Started sshd@10-10.200.20.47:22-10.200.16.10:44268.service.
Sep 13 01:35:22.599762 sshd[3862]: Accepted publickey for core from 10.200.16.10 port 44268 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:22.601067 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:22.605146 systemd-logind[1462]: New session 13 of user core.
Sep 13 01:35:22.605575 systemd[1]: Started session-13.scope.
Sep 13 01:35:23.011764 sshd[3862]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:23.014572 systemd[1]: sshd@10-10.200.20.47:22-10.200.16.10:44268.service: Deactivated successfully.
Sep 13 01:35:23.015728 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 01:35:23.016566 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit.
Sep 13 01:35:23.017510 systemd-logind[1462]: Removed session 13.
Sep 13 01:35:23.081284 systemd[1]: Started sshd@11-10.200.20.47:22-10.200.16.10:44282.service.
Sep 13 01:35:23.490573 sshd[3872]: Accepted publickey for core from 10.200.16.10 port 44282 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:23.491901 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:23.496353 systemd[1]: Started session-14.scope.
Sep 13 01:35:23.497528 systemd-logind[1462]: New session 14 of user core.
Sep 13 01:35:23.853854 sshd[3872]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:23.856577 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit.
Sep 13 01:35:23.857634 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 01:35:23.858579 systemd-logind[1462]: Removed session 14.
Sep 13 01:35:23.859082 systemd[1]: sshd@11-10.200.20.47:22-10.200.16.10:44282.service: Deactivated successfully.
Sep 13 01:35:28.926286 systemd[1]: Started sshd@12-10.200.20.47:22-10.200.16.10:44290.service.
Sep 13 01:35:29.337961 sshd[3886]: Accepted publickey for core from 10.200.16.10 port 44290 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:29.339641 sshd[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:29.343969 systemd[1]: Started session-15.scope.
Sep 13 01:35:29.344669 systemd-logind[1462]: New session 15 of user core.
Sep 13 01:35:29.706220 sshd[3886]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:29.708882 systemd[1]: sshd@12-10.200.20.47:22-10.200.16.10:44290.service: Deactivated successfully.
Sep 13 01:35:29.709657 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 01:35:29.710231 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit.
Sep 13 01:35:29.710955 systemd-logind[1462]: Removed session 15.
Sep 13 01:35:34.778481 systemd[1]: Started sshd@13-10.200.20.47:22-10.200.16.10:35482.service.
Sep 13 01:35:35.188238 sshd[3900]: Accepted publickey for core from 10.200.16.10 port 35482 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:35.189854 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:35.193563 systemd-logind[1462]: New session 16 of user core.
Sep 13 01:35:35.194046 systemd[1]: Started session-16.scope.
Sep 13 01:35:35.561249 sshd[3900]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:35.564022 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit.
Sep 13 01:35:35.564191 systemd[1]: sshd@13-10.200.20.47:22-10.200.16.10:35482.service: Deactivated successfully.
Sep 13 01:35:35.564896 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 01:35:35.565630 systemd-logind[1462]: Removed session 16.
Sep 13 01:35:35.629857 systemd[1]: Started sshd@14-10.200.20.47:22-10.200.16.10:35492.service.
Sep 13 01:35:36.040580 sshd[3911]: Accepted publickey for core from 10.200.16.10 port 35492 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:36.042149 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:36.046374 systemd[1]: Started session-17.scope.
Sep 13 01:35:36.046993 systemd-logind[1462]: New session 17 of user core.
Sep 13 01:35:36.430943 sshd[3911]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:36.433419 systemd[1]: sshd@14-10.200.20.47:22-10.200.16.10:35492.service: Deactivated successfully.
Sep 13 01:35:36.434172 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 01:35:36.434748 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit.
Sep 13 01:35:36.435699 systemd-logind[1462]: Removed session 17.
Sep 13 01:35:36.499931 systemd[1]: Started sshd@15-10.200.20.47:22-10.200.16.10:35498.service.
Sep 13 01:35:36.912147 sshd[3920]: Accepted publickey for core from 10.200.16.10 port 35498 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:36.913787 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:36.918087 systemd[1]: Started session-18.scope.
Sep 13 01:35:36.918425 systemd-logind[1462]: New session 18 of user core.
Sep 13 01:35:37.801789 sshd[3920]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:37.804454 systemd[1]: sshd@15-10.200.20.47:22-10.200.16.10:35498.service: Deactivated successfully.
Sep 13 01:35:37.805927 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 01:35:37.807030 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit.
Sep 13 01:35:37.807876 systemd-logind[1462]: Removed session 18.
Sep 13 01:35:37.870938 systemd[1]: Started sshd@16-10.200.20.47:22-10.200.16.10:35514.service.
Sep 13 01:35:38.280103 sshd[3937]: Accepted publickey for core from 10.200.16.10 port 35514 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:38.281706 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:38.285347 systemd-logind[1462]: New session 19 of user core.
Sep 13 01:35:38.285837 systemd[1]: Started session-19.scope.
Sep 13 01:35:38.779997 sshd[3937]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:38.782398 systemd[1]: sshd@16-10.200.20.47:22-10.200.16.10:35514.service: Deactivated successfully.
Sep 13 01:35:38.783124 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 01:35:38.783691 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit.
Sep 13 01:35:38.784540 systemd-logind[1462]: Removed session 19.
Sep 13 01:35:38.848774 systemd[1]: Started sshd@17-10.200.20.47:22-10.200.16.10:35518.service.
Sep 13 01:35:39.261863 sshd[3946]: Accepted publickey for core from 10.200.16.10 port 35518 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:39.263133 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:39.266643 systemd-logind[1462]: New session 20 of user core.
Sep 13 01:35:39.267342 systemd[1]: Started session-20.scope.
Sep 13 01:35:39.648390 sshd[3946]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:39.651042 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 01:35:39.651696 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit.
Sep 13 01:35:39.651835 systemd[1]: sshd@17-10.200.20.47:22-10.200.16.10:35518.service: Deactivated successfully.
Sep 13 01:35:39.652888 systemd-logind[1462]: Removed session 20.
Sep 13 01:35:44.716664 systemd[1]: Started sshd@18-10.200.20.47:22-10.200.16.10:54688.service.
Sep 13 01:35:45.124422 sshd[3959]: Accepted publickey for core from 10.200.16.10 port 54688 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:45.125856 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:45.130314 systemd[1]: Started session-21.scope.
Sep 13 01:35:45.130659 systemd-logind[1462]: New session 21 of user core.
Sep 13 01:35:45.509984 sshd[3959]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:45.512683 systemd[1]: sshd@18-10.200.20.47:22-10.200.16.10:54688.service: Deactivated successfully.
Sep 13 01:35:45.513353 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 01:35:45.513773 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit.
Sep 13 01:35:45.514413 systemd-logind[1462]: Removed session 21.
Sep 13 01:35:50.580537 systemd[1]: Started sshd@19-10.200.20.47:22-10.200.16.10:53936.service.
Sep 13 01:35:50.992514 sshd[3971]: Accepted publickey for core from 10.200.16.10 port 53936 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:50.993853 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:50.998445 systemd[1]: Started session-22.scope.
Sep 13 01:35:50.998790 systemd-logind[1462]: New session 22 of user core.
Sep 13 01:35:51.353168 sshd[3971]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:51.356007 systemd[1]: sshd@19-10.200.20.47:22-10.200.16.10:53936.service: Deactivated successfully.
Sep 13 01:35:51.356768 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 01:35:51.357283 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit.
Sep 13 01:35:51.357990 systemd-logind[1462]: Removed session 22.
Sep 13 01:35:56.422440 systemd[1]: Started sshd@20-10.200.20.47:22-10.200.16.10:53938.service.
Sep 13 01:35:56.834835 sshd[3983]: Accepted publickey for core from 10.200.16.10 port 53938 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:56.836576 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:56.841004 systemd[1]: Started session-23.scope.
Sep 13 01:35:56.841638 systemd-logind[1462]: New session 23 of user core.
Sep 13 01:35:57.220043 sshd[3983]: pam_unix(sshd:session): session closed for user core
Sep 13 01:35:57.223140 systemd[1]: sshd@20-10.200.20.47:22-10.200.16.10:53938.service: Deactivated successfully.
Sep 13 01:35:57.223846 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 01:35:57.224885 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit.
Sep 13 01:35:57.225806 systemd-logind[1462]: Removed session 23.
Sep 13 01:35:57.288859 systemd[1]: Started sshd@21-10.200.20.47:22-10.200.16.10:53952.service.
Sep 13 01:35:57.699143 sshd[3998]: Accepted publickey for core from 10.200.16.10 port 53952 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:35:57.700817 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:35:57.704879 systemd-logind[1462]: New session 24 of user core.
Sep 13 01:35:57.705300 systemd[1]: Started session-24.scope.
Sep 13 01:35:59.458730 env[1477]: time="2025-09-13T01:35:59.458675962Z" level=info msg="StopContainer for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" with timeout 30 (s)"
Sep 13 01:35:59.459150 env[1477]: time="2025-09-13T01:35:59.459107761Z" level=info msg="Stop container \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" with signal terminated"
Sep 13 01:35:59.472211 systemd[1]: cri-containerd-6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24.scope: Deactivated successfully.
Sep 13 01:35:59.478732 env[1477]: time="2025-09-13T01:35:59.478671942Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 01:35:59.488447 env[1477]: time="2025-09-13T01:35:59.488416533Z" level=info msg="StopContainer for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" with timeout 2 (s)"
Sep 13 01:35:59.488869 env[1477]: time="2025-09-13T01:35:59.488834492Z" level=info msg="Stop container \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" with signal terminated"
Sep 13 01:35:59.493064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24-rootfs.mount: Deactivated successfully.
Sep 13 01:35:59.498665 systemd-networkd[1640]: lxc_health: Link DOWN
Sep 13 01:35:59.498672 systemd-networkd[1640]: lxc_health: Lost carrier
Sep 13 01:35:59.521234 systemd[1]: cri-containerd-3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a.scope: Deactivated successfully.
Sep 13 01:35:59.521548 systemd[1]: cri-containerd-3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a.scope: Consumed 6.154s CPU time.
Sep 13 01:35:59.538240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a-rootfs.mount: Deactivated successfully.
Sep 13 01:35:59.549978 env[1477]: time="2025-09-13T01:35:59.549938953Z" level=info msg="shim disconnected" id=6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24
Sep 13 01:35:59.574113 env[1477]: time="2025-09-13T01:35:59.550200113Z" level=warning msg="cleaning up after shim disconnected" id=6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24 namespace=k8s.io
Sep 13 01:35:59.574113 env[1477]: time="2025-09-13T01:35:59.550214553Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:59.574113 env[1477]: time="2025-09-13T01:35:59.557255066Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:59.585479 env[1477]: time="2025-09-13T01:35:59.585443679Z" level=info msg="StopContainer for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" returns successfully"
Sep 13 01:35:59.586849 env[1477]: time="2025-09-13T01:35:59.586826958Z" level=info msg="StopPodSandbox for \"5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666\""
Sep 13 01:35:59.587090 env[1477]: time="2025-09-13T01:35:59.587068597Z" level=info msg="Container to stop \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:35:59.588955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666-shm.mount: Deactivated successfully.
Sep 13 01:35:59.590184 env[1477]: time="2025-09-13T01:35:59.590149714Z" level=info msg="shim disconnected" id=3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a
Sep 13 01:35:59.590377 env[1477]: time="2025-09-13T01:35:59.590344194Z" level=warning msg="cleaning up after shim disconnected" id=3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a namespace=k8s.io
Sep 13 01:35:59.590461 env[1477]: time="2025-09-13T01:35:59.590446954Z" level=info msg="cleaning up dead shim"
Sep 13 01:35:59.594875 systemd[1]: cri-containerd-5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666.scope: Deactivated successfully.
Sep 13 01:35:59.604382 env[1477]: time="2025-09-13T01:35:59.604353061Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4079 runtime=io.containerd.runc.v2\n"
Sep 13 01:35:59.610804 env[1477]: time="2025-09-13T01:35:59.610774334Z" level=info msg="StopContainer for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" returns successfully"
Sep 13 01:35:59.611367 env[1477]: time="2025-09-13T01:35:59.611345614Z" level=info msg="StopPodSandbox for \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\""
Sep 13 01:35:59.611546 env[1477]: time="2025-09-13T01:35:59.611513614Z" level=info msg="Container to stop \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:35:59.611655 env[1477]: time="2025-09-13T01:35:59.611636574Z" level=info msg="Container to stop \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:35:59.611892 env[1477]: time="2025-09-13T01:35:59.611852413Z" level=info msg="Container to stop \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:35:59.611993 env[1477]: time="2025-09-13T01:35:59.611976173Z" level=info msg="Container to stop \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:35:59.612076 env[1477]: time="2025-09-13T01:35:59.612060293Z" level=info msg="Container to stop \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:35:59.614030 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c-shm.mount: Deactivated successfully.
Sep 13 01:35:59.619257 systemd[1]: cri-containerd-24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c.scope: Deactivated successfully.
Sep 13 01:35:59.635902 env[1477]: time="2025-09-13T01:35:59.635858150Z" level=info msg="shim disconnected" id=5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666 Sep 13 01:35:59.636481 env[1477]: time="2025-09-13T01:35:59.636457990Z" level=warning msg="cleaning up after shim disconnected" id=5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666 namespace=k8s.io Sep 13 01:35:59.636576 env[1477]: time="2025-09-13T01:35:59.636563389Z" level=info msg="cleaning up dead shim" Sep 13 01:35:59.644848 env[1477]: time="2025-09-13T01:35:59.644819901Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4128 runtime=io.containerd.runc.v2\n" Sep 13 01:35:59.645952 env[1477]: time="2025-09-13T01:35:59.645157901Z" level=info msg="TearDown network for sandbox \"5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666\" successfully" Sep 13 01:35:59.645952 env[1477]: time="2025-09-13T01:35:59.645178621Z" level=info msg="StopPodSandbox for \"5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666\" returns successfully" Sep 13 01:35:59.647161 env[1477]: time="2025-09-13T01:35:59.647128219Z" level=info msg="shim disconnected" id=24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c Sep 13 01:35:59.647279 env[1477]: time="2025-09-13T01:35:59.647262899Z" level=warning msg="cleaning up after shim disconnected" id=24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c namespace=k8s.io Sep 13 01:35:59.647358 env[1477]: time="2025-09-13T01:35:59.647345219Z" level=info msg="cleaning up dead shim" Sep 13 01:35:59.671681 env[1477]: time="2025-09-13T01:35:59.671642035Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4142 runtime=io.containerd.runc.v2\n" Sep 13 01:35:59.672112 env[1477]: time="2025-09-13T01:35:59.672085155Z" level=info msg="TearDown network for sandbox 
\"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" successfully" Sep 13 01:35:59.672207 env[1477]: time="2025-09-13T01:35:59.672188675Z" level=info msg="StopPodSandbox for \"24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c\" returns successfully" Sep 13 01:35:59.755219 kubelet[2435]: I0913 01:35:59.755099 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-run\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755540 kubelet[2435]: I0913 01:35:59.755243 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-cilium-config-path\") pod \"3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e\" (UID: \"3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e\") " Sep 13 01:35:59.755540 kubelet[2435]: I0913 01:35:59.755262 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-lib-modules\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755540 kubelet[2435]: I0913 01:35:59.755281 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-clustermesh-secrets\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755540 kubelet[2435]: I0913 01:35:59.755305 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hostproc\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: 
\"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755540 kubelet[2435]: I0913 01:35:59.755346 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-xtables-lock\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755540 kubelet[2435]: I0913 01:35:59.755375 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hubble-tls\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755719 kubelet[2435]: I0913 01:35:59.755391 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-net\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755719 kubelet[2435]: I0913 01:35:59.755407 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-config-path\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755719 kubelet[2435]: I0913 01:35:59.755423 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-bpf-maps\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755719 kubelet[2435]: I0913 01:35:59.755449 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cv42\" (UniqueName: 
\"kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-kube-api-access-8cv42\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755719 kubelet[2435]: I0913 01:35:59.755467 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cni-path\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755719 kubelet[2435]: I0913 01:35:59.755481 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-kernel\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755851 kubelet[2435]: I0913 01:35:59.755496 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-cgroup\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755851 kubelet[2435]: I0913 01:35:59.755511 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-etc-cni-netd\") pod \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\" (UID: \"0b2ecc18-29ed-409b-bdee-b28f85cc8c6d\") " Sep 13 01:35:59.755851 kubelet[2435]: I0913 01:35:59.755540 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hgzg\" (UniqueName: \"kubernetes.io/projected/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-kube-api-access-6hgzg\") pod \"3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e\" (UID: \"3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e\") " Sep 13 01:35:59.759620 kubelet[2435]: I0913 
01:35:59.755195 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.759620 kubelet[2435]: I0913 01:35:59.755964 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.759620 kubelet[2435]: I0913 01:35:59.758356 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:35:59.759620 kubelet[2435]: I0913 01:35:59.758412 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.759620 kubelet[2435]: I0913 01:35:59.758416 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.759824 kubelet[2435]: I0913 01:35:59.758451 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.759824 kubelet[2435]: I0913 01:35:59.758465 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.760734 kubelet[2435]: I0913 01:35:59.760703 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.760852 kubelet[2435]: I0913 01:35:59.760837 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.760929 kubelet[2435]: I0913 01:35:59.760916 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.760997 kubelet[2435]: I0913 01:35:59.760986 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:35:59.762900 kubelet[2435]: I0913 01:35:59.762864 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:35:59.763275 kubelet[2435]: I0913 01:35:59.763241 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e" (UID: "3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:35:59.763335 kubelet[2435]: I0913 01:35:59.763312 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-kube-api-access-6hgzg" (OuterVolumeSpecName: "kube-api-access-6hgzg") pod "3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e" (UID: "3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e"). InnerVolumeSpecName "kube-api-access-6hgzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:35:59.764717 kubelet[2435]: I0913 01:35:59.764682 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-kube-api-access-8cv42" (OuterVolumeSpecName: "kube-api-access-8cv42") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "kube-api-access-8cv42". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:35:59.764795 kubelet[2435]: I0913 01:35:59.764759 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" (UID: "0b2ecc18-29ed-409b-bdee-b28f85cc8c6d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:35:59.856317 kubelet[2435]: I0913 01:35:59.856276 2435 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cni-path\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856317 kubelet[2435]: I0913 01:35:59.856310 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856317 kubelet[2435]: I0913 01:35:59.856319 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-cgroup\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856328 2435 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-etc-cni-netd\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856340 2435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hgzg\" (UniqueName: \"kubernetes.io/projected/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-kube-api-access-6hgzg\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856349 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-run\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856357 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e-cilium-config-path\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856365 2435 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-lib-modules\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856374 2435 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-clustermesh-secrets\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856382 2435 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hostproc\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856519 kubelet[2435]: I0913 01:35:59.856391 2435 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-xtables-lock\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856731 kubelet[2435]: I0913 01:35:59.856400 2435 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-hubble-tls\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856731 kubelet[2435]: I0913 01:35:59.856409 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-host-proc-sys-net\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856731 kubelet[2435]: I0913 01:35:59.856417 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-cilium-config-path\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856731 kubelet[2435]: I0913 01:35:59.856426 2435 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-bpf-maps\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.856731 kubelet[2435]: I0913 01:35:59.856435 2435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8cv42\" (UniqueName: \"kubernetes.io/projected/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d-kube-api-access-8cv42\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:35:59.947313 kubelet[2435]: I0913 01:35:59.947280 2435 scope.go:117] "RemoveContainer" containerID="6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24" Sep 13 01:35:59.949303 env[1477]: time="2025-09-13T01:35:59.948882367Z" level=info msg="RemoveContainer for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\"" Sep 13 01:35:59.953445 systemd[1]: Removed slice kubepods-besteffort-pod3f1e2457_a2cd_4c4e_aa62_9fd6faf2345e.slice. Sep 13 01:35:59.960012 systemd[1]: Removed slice kubepods-burstable-pod0b2ecc18_29ed_409b_bdee_b28f85cc8c6d.slice. Sep 13 01:35:59.960091 systemd[1]: kubepods-burstable-pod0b2ecc18_29ed_409b_bdee_b28f85cc8c6d.slice: Consumed 6.240s CPU time. 
Sep 13 01:35:59.964145 env[1477]: time="2025-09-13T01:35:59.963871312Z" level=info msg="RemoveContainer for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" returns successfully" Sep 13 01:35:59.965271 kubelet[2435]: I0913 01:35:59.965169 2435 scope.go:117] "RemoveContainer" containerID="6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24" Sep 13 01:35:59.965863 env[1477]: time="2025-09-13T01:35:59.965726911Z" level=error msg="ContainerStatus for \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\": not found" Sep 13 01:35:59.966149 kubelet[2435]: E0913 01:35:59.966125 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\": not found" containerID="6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24" Sep 13 01:35:59.966324 kubelet[2435]: I0913 01:35:59.966176 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24"} err="failed to get container status \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d93bad81e305751f596601e5aba32b142521b753e6f9b67a3820f9dedb57d24\": not found" Sep 13 01:35:59.966324 kubelet[2435]: I0913 01:35:59.966233 2435 scope.go:117] "RemoveContainer" containerID="3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a" Sep 13 01:35:59.967247 env[1477]: time="2025-09-13T01:35:59.967224109Z" level=info msg="RemoveContainer for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\"" Sep 13 01:35:59.977948 env[1477]: 
time="2025-09-13T01:35:59.977921259Z" level=info msg="RemoveContainer for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" returns successfully" Sep 13 01:35:59.978206 kubelet[2435]: I0913 01:35:59.978190 2435 scope.go:117] "RemoveContainer" containerID="41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054" Sep 13 01:35:59.979296 env[1477]: time="2025-09-13T01:35:59.979272058Z" level=info msg="RemoveContainer for \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\"" Sep 13 01:35:59.989264 env[1477]: time="2025-09-13T01:35:59.989230688Z" level=info msg="RemoveContainer for \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\" returns successfully" Sep 13 01:35:59.989568 kubelet[2435]: I0913 01:35:59.989550 2435 scope.go:117] "RemoveContainer" containerID="534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a" Sep 13 01:35:59.990765 env[1477]: time="2025-09-13T01:35:59.990733006Z" level=info msg="RemoveContainer for \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\"" Sep 13 01:35:59.999223 env[1477]: time="2025-09-13T01:35:59.999184678Z" level=info msg="RemoveContainer for \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\" returns successfully" Sep 13 01:35:59.999388 kubelet[2435]: I0913 01:35:59.999363 2435 scope.go:117] "RemoveContainer" containerID="f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a" Sep 13 01:36:00.000437 env[1477]: time="2025-09-13T01:36:00.000406037Z" level=info msg="RemoveContainer for \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\"" Sep 13 01:36:00.010795 env[1477]: time="2025-09-13T01:36:00.009860028Z" level=info msg="RemoveContainer for \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\" returns successfully" Sep 13 01:36:00.011042 kubelet[2435]: I0913 01:36:00.010028 2435 scope.go:117] "RemoveContainer" containerID="245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c" 
Sep 13 01:36:00.011863 env[1477]: time="2025-09-13T01:36:00.011811906Z" level=info msg="RemoveContainer for \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\"" Sep 13 01:36:00.019397 env[1477]: time="2025-09-13T01:36:00.019359939Z" level=info msg="RemoveContainer for \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\" returns successfully" Sep 13 01:36:00.019563 kubelet[2435]: I0913 01:36:00.019538 2435 scope.go:117] "RemoveContainer" containerID="3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a" Sep 13 01:36:00.019800 env[1477]: time="2025-09-13T01:36:00.019740738Z" level=error msg="ContainerStatus for \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\": not found" Sep 13 01:36:00.019971 kubelet[2435]: E0913 01:36:00.019950 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\": not found" containerID="3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a" Sep 13 01:36:00.020080 kubelet[2435]: I0913 01:36:00.020059 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a"} err="failed to get container status \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f8dea36ffa3c131d8d3139b1858d5270eb454cf830fcb4813c28efdefa3fa9a\": not found" Sep 13 01:36:00.020155 kubelet[2435]: I0913 01:36:00.020144 2435 scope.go:117] "RemoveContainer" containerID="41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054" Sep 13 01:36:00.020400 env[1477]: 
time="2025-09-13T01:36:00.020353618Z" level=error msg="ContainerStatus for \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\": not found"
Sep 13 01:36:00.020574 kubelet[2435]: E0913 01:36:00.020553 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\": not found" containerID="41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054"
Sep 13 01:36:00.020637 kubelet[2435]: I0913 01:36:00.020578 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054"} err="failed to get container status \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\": rpc error: code = NotFound desc = an error occurred when try to find container \"41cd9c2afd7a40cf77c30853a86080b67d342de2c1b704827abcad0ccda9c054\": not found"
Sep 13 01:36:00.020637 kubelet[2435]: I0913 01:36:00.020612 2435 scope.go:117] "RemoveContainer" containerID="534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a"
Sep 13 01:36:00.020812 env[1477]: time="2025-09-13T01:36:00.020765217Z" level=error msg="ContainerStatus for \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\": not found"
Sep 13 01:36:00.020922 kubelet[2435]: E0913 01:36:00.020901 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\": not found" containerID="534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a"
Sep 13 01:36:00.020968 kubelet[2435]: I0913 01:36:00.020926 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a"} err="failed to get container status \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\": rpc error: code = NotFound desc = an error occurred when try to find container \"534a3f3b6a106ab8c632e5c9df1b86e434170737a55ce18a0267e75b19da614a\": not found"
Sep 13 01:36:00.020968 kubelet[2435]: I0913 01:36:00.020942 2435 scope.go:117] "RemoveContainer" containerID="f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a"
Sep 13 01:36:00.021116 env[1477]: time="2025-09-13T01:36:00.021073177Z" level=error msg="ContainerStatus for \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\": not found"
Sep 13 01:36:00.021237 kubelet[2435]: E0913 01:36:00.021212 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\": not found" containerID="f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a"
Sep 13 01:36:00.021281 kubelet[2435]: I0913 01:36:00.021242 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a"} err="failed to get container status \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9801f31c24b91019804e4543d0831cc0793ec731d6bcfa148a16eca99d87a2a\": not found"
Sep 13 01:36:00.021281 kubelet[2435]: I0913 01:36:00.021256 2435 scope.go:117] "RemoveContainer" containerID="245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c"
Sep 13 01:36:00.021433 env[1477]: time="2025-09-13T01:36:00.021388417Z" level=error msg="ContainerStatus for \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\": not found"
Sep 13 01:36:00.021540 kubelet[2435]: E0913 01:36:00.021518 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\": not found" containerID="245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c"
Sep 13 01:36:00.021585 kubelet[2435]: I0913 01:36:00.021544 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c"} err="failed to get container status \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\": rpc error: code = NotFound desc = an error occurred when try to find container \"245b2c65e1330c7a852f1ce74c094a3ea0623462708fc6c3a61be336b4bb267c\": not found"
Sep 13 01:36:00.449209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ea92e4f9526468b0e78a9cd1ef3b92dd5c01e368731c318818e333f7cfea666-rootfs.mount: Deactivated successfully.
Sep 13 01:36:00.449300 systemd[1]: var-lib-kubelet-pods-3f1e2457\x2da2cd\x2d4c4e\x2daa62\x2d9fd6faf2345e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hgzg.mount: Deactivated successfully.
Sep 13 01:36:00.449358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24f1cd87a0bc204df22b28ce25e1649e75755ea99db597336bada2c5731fc27c-rootfs.mount: Deactivated successfully.
Sep 13 01:36:00.449404 systemd[1]: var-lib-kubelet-pods-0b2ecc18\x2d29ed\x2d409b\x2dbdee\x2db28f85cc8c6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8cv42.mount: Deactivated successfully.
Sep 13 01:36:00.449456 systemd[1]: var-lib-kubelet-pods-0b2ecc18\x2d29ed\x2d409b\x2dbdee\x2db28f85cc8c6d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 01:36:00.449505 systemd[1]: var-lib-kubelet-pods-0b2ecc18\x2d29ed\x2d409b\x2dbdee\x2db28f85cc8c6d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 01:36:01.479169 sshd[3998]: pam_unix(sshd:session): session closed for user core
Sep 13 01:36:01.482027 systemd[1]: sshd@21-10.200.20.47:22-10.200.16.10:53952.service: Deactivated successfully.
Sep 13 01:36:01.483169 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 01:36:01.483987 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit.
Sep 13 01:36:01.484890 systemd-logind[1462]: Removed session 24.
Sep 13 01:36:01.487539 kubelet[2435]: I0913 01:36:01.487502 2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2ecc18-29ed-409b-bdee-b28f85cc8c6d" path="/var/lib/kubelet/pods/0b2ecc18-29ed-409b-bdee-b28f85cc8c6d/volumes"
Sep 13 01:36:01.488105 kubelet[2435]: I0913 01:36:01.488082 2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e" path="/var/lib/kubelet/pods/3f1e2457-a2cd-4c4e-aa62-9fd6faf2345e/volumes"
Sep 13 01:36:01.549306 systemd[1]: Started sshd@22-10.200.20.47:22-10.200.16.10:57436.service.
Sep 13 01:36:01.958582 sshd[4164]: Accepted publickey for core from 10.200.16.10 port 57436 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:36:01.960262 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:36:01.964543 systemd[1]: Started session-25.scope.
Sep 13 01:36:01.964898 systemd-logind[1462]: New session 25 of user core.
Sep 13 01:36:03.250226 systemd[1]: Created slice kubepods-burstable-pod3c95e415_bb2d_4c60_bf45_acff8a30d27b.slice.
Sep 13 01:36:03.276841 sshd[4164]: pam_unix(sshd:session): session closed for user core
Sep 13 01:36:03.279733 systemd[1]: sshd@22-10.200.20.47:22-10.200.16.10:57436.service: Deactivated successfully.
Sep 13 01:36:03.280501 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 01:36:03.281132 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit.
Sep 13 01:36:03.281960 systemd-logind[1462]: Removed session 25.
Sep 13 01:36:03.344805 systemd[1]: Started sshd@23-10.200.20.47:22-10.200.16.10:57446.service.
Sep 13 01:36:03.377713 kubelet[2435]: I0913 01:36:03.377680 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-clustermesh-secrets\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378096 kubelet[2435]: I0913 01:36:03.378078 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-etc-cni-netd\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378205 kubelet[2435]: I0913 01:36:03.378192 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5trr4\" (UniqueName: \"kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-kube-api-access-5trr4\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378325 kubelet[2435]: I0913 01:36:03.378286 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-run\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378422 kubelet[2435]: I0913 01:36:03.378408 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-cgroup\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378520 kubelet[2435]: I0913 01:36:03.378507 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-config-path\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378629 kubelet[2435]: I0913 01:36:03.378615 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-ipsec-secrets\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378725 kubelet[2435]: I0913 01:36:03.378713 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hostproc\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378822 kubelet[2435]: I0913 01:36:03.378806 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-bpf-maps\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.378914 kubelet[2435]: I0913 01:36:03.378901 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hubble-tls\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.379013 kubelet[2435]: I0913 01:36:03.378997 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cni-path\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.379106 kubelet[2435]: I0913 01:36:03.379093 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-lib-modules\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.379198 kubelet[2435]: I0913 01:36:03.379185 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-xtables-lock\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.379288 kubelet[2435]: I0913 01:36:03.379275 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-net\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.379377 kubelet[2435]: I0913 01:36:03.379364 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-kernel\") pod \"cilium-9g967\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") " pod="kube-system/cilium-9g967"
Sep 13 01:36:03.553171 env[1477]: time="2025-09-13T01:36:03.553052298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9g967,Uid:3c95e415-bb2d-4c60-bf45-acff8a30d27b,Namespace:kube-system,Attempt:0,}"
Sep 13 01:36:03.590616 kubelet[2435]: E0913 01:36:03.589993 2435 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 01:36:03.595734 env[1477]: time="2025-09-13T01:36:03.595664658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:36:03.595917 env[1477]: time="2025-09-13T01:36:03.595895337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:36:03.596017 env[1477]: time="2025-09-13T01:36:03.595997657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:36:03.596289 env[1477]: time="2025-09-13T01:36:03.596259857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e pid=4188 runtime=io.containerd.runc.v2
Sep 13 01:36:03.606770 systemd[1]: Started cri-containerd-fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e.scope.
Sep 13 01:36:03.630292 env[1477]: time="2025-09-13T01:36:03.630249984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9g967,Uid:3c95e415-bb2d-4c60-bf45-acff8a30d27b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\""
Sep 13 01:36:03.642695 env[1477]: time="2025-09-13T01:36:03.642647493Z" level=info msg="CreateContainer within sandbox \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:36:03.681669 env[1477]: time="2025-09-13T01:36:03.681623615Z" level=info msg="CreateContainer within sandbox \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\""
Sep 13 01:36:03.683867 env[1477]: time="2025-09-13T01:36:03.683415613Z" level=info msg="StartContainer for \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\""
Sep 13 01:36:03.699358 systemd[1]: Started cri-containerd-445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761.scope.
Sep 13 01:36:03.711965 systemd[1]: cri-containerd-445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761.scope: Deactivated successfully.
Sep 13 01:36:03.759801 sshd[4174]: Accepted publickey for core from 10.200.16.10 port 57446 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:36:03.774388 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:36:03.779329 systemd[1]: Started session-26.scope.
Sep 13 01:36:03.779809 systemd-logind[1462]: New session 26 of user core.
Sep 13 01:36:03.786215 env[1477]: time="2025-09-13T01:36:03.786158155Z" level=info msg="shim disconnected" id=445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761
Sep 13 01:36:03.786404 env[1477]: time="2025-09-13T01:36:03.786384155Z" level=warning msg="cleaning up after shim disconnected" id=445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761 namespace=k8s.io
Sep 13 01:36:03.786468 env[1477]: time="2025-09-13T01:36:03.786455195Z" level=info msg="cleaning up dead shim"
Sep 13 01:36:03.793566 env[1477]: time="2025-09-13T01:36:03.793526828Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4251 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T01:36:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 13 01:36:03.794031 env[1477]: time="2025-09-13T01:36:03.793929668Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Sep 13 01:36:03.795055 env[1477]: time="2025-09-13T01:36:03.794168347Z" level=error msg="Failed to pipe stdout of container \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\"" error="reading from a closed fifo"
Sep 13 01:36:03.795121 env[1477]: time="2025-09-13T01:36:03.795005267Z" level=error msg="Failed to pipe stderr of container \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\"" error="reading from a closed fifo"
Sep 13 01:36:03.800062 env[1477]: time="2025-09-13T01:36:03.800011302Z" level=error msg="StartContainer for \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 13 01:36:03.800802 kubelet[2435]: E0913 01:36:03.800396 2435 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761"
Sep 13 01:36:03.802285 kubelet[2435]: E0913 01:36:03.801579 2435 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 13 01:36:03.802285 kubelet[2435]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 13 01:36:03.802285 kubelet[2435]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 13 01:36:03.802285 kubelet[2435]: rm /hostbin/cilium-mount
Sep 13 01:36:03.802528 kubelet[2435]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5trr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9g967_kube-system(3c95e415-bb2d-4c60-bf45-acff8a30d27b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 13 01:36:03.802528 kubelet[2435]: > logger="UnhandledError"
Sep 13 01:36:03.803510 kubelet[2435]: E0913 01:36:03.803388 2435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9g967" podUID="3c95e415-bb2d-4c60-bf45-acff8a30d27b"
Sep 13 01:36:03.965438 env[1477]: time="2025-09-13T01:36:03.965388263Z" level=info msg="CreateContainer within sandbox \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Sep 13 01:36:03.995320 env[1477]: time="2025-09-13T01:36:03.995248475Z" level=info msg="CreateContainer within sandbox \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\""
Sep 13 01:36:03.997354 env[1477]: time="2025-09-13T01:36:03.997213673Z" level=info msg="StartContainer for \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\""
Sep 13 01:36:04.017140 systemd[1]: Started cri-containerd-6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3.scope.
Sep 13 01:36:04.032503 systemd[1]: cri-containerd-6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3.scope: Deactivated successfully.
Sep 13 01:36:04.054731 env[1477]: time="2025-09-13T01:36:04.054580818Z" level=info msg="shim disconnected" id=6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3
Sep 13 01:36:04.054731 env[1477]: time="2025-09-13T01:36:04.054653938Z" level=warning msg="cleaning up after shim disconnected" id=6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3 namespace=k8s.io
Sep 13 01:36:04.054731 env[1477]: time="2025-09-13T01:36:04.054666298Z" level=info msg="cleaning up dead shim"
Sep 13 01:36:04.068329 env[1477]: time="2025-09-13T01:36:04.068275685Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4296 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T01:36:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Sep 13 01:36:04.068603 env[1477]: time="2025-09-13T01:36:04.068536965Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Sep 13 01:36:04.068827 env[1477]: time="2025-09-13T01:36:04.068790285Z" level=error msg="Failed to pipe stdout of container \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\"" error="reading from a closed fifo"
Sep 13 01:36:04.074872 env[1477]: time="2025-09-13T01:36:04.074815679Z" level=error msg="Failed to pipe stderr of container \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\"" error="reading from a closed fifo"
Sep 13 01:36:04.080610 env[1477]: time="2025-09-13T01:36:04.080549233Z" level=error msg="StartContainer for \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Sep 13 01:36:04.080951 kubelet[2435]: E0913 01:36:04.080908 2435 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3"
Sep 13 01:36:04.081423 kubelet[2435]: E0913 01:36:04.081380 2435 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Sep 13 01:36:04.081423 kubelet[2435]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Sep 13 01:36:04.081423 kubelet[2435]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Sep 13 01:36:04.081423 kubelet[2435]: rm /hostbin/cilium-mount
Sep 13 01:36:04.081423 kubelet[2435]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5trr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-9g967_kube-system(3c95e415-bb2d-4c60-bf45-acff8a30d27b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Sep 13 01:36:04.081423 kubelet[2435]: > logger="UnhandledError"
Sep 13 01:36:04.082860 kubelet[2435]: E0913 01:36:04.082806 2435 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-9g967" podUID="3c95e415-bb2d-4c60-bf45-acff8a30d27b"
Sep 13 01:36:04.182566 sshd[4174]: pam_unix(sshd:session): session closed for user core
Sep 13 01:36:04.185436 systemd[1]: sshd@23-10.200.20.47:22-10.200.16.10:57446.service: Deactivated successfully.
Sep 13 01:36:04.186134 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:36:04.187157 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:36:04.187963 systemd-logind[1462]: Removed session 26.
Sep 13 01:36:04.251545 systemd[1]: Started sshd@24-10.200.20.47:22-10.200.16.10:57458.service.
Sep 13 01:36:04.660831 sshd[4310]: Accepted publickey for core from 10.200.16.10 port 57458 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:36:04.662445 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:36:04.666757 systemd[1]: Started session-27.scope.
Sep 13 01:36:04.667189 systemd-logind[1462]: New session 27 of user core.
Sep 13 01:36:04.966428 kubelet[2435]: I0913 01:36:04.966390 2435 scope.go:117] "RemoveContainer" containerID="445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761"
Sep 13 01:36:04.967691 env[1477]: time="2025-09-13T01:36:04.967028426Z" level=info msg="StopPodSandbox for \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\""
Sep 13 01:36:04.967691 env[1477]: time="2025-09-13T01:36:04.967093186Z" level=info msg="Container to stop \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:36:04.967691 env[1477]: time="2025-09-13T01:36:04.967108586Z" level=info msg="Container to stop \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 01:36:04.969106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e-shm.mount: Deactivated successfully.
Sep 13 01:36:04.976241 env[1477]: time="2025-09-13T01:36:04.976205098Z" level=info msg="RemoveContainer for \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\""
Sep 13 01:36:04.979214 systemd[1]: cri-containerd-fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e.scope: Deactivated successfully.
Sep 13 01:36:04.985881 env[1477]: time="2025-09-13T01:36:04.985837768Z" level=info msg="RemoveContainer for \"445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761\" returns successfully"
Sep 13 01:36:05.006851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e-rootfs.mount: Deactivated successfully.
Sep 13 01:36:05.024323 env[1477]: time="2025-09-13T01:36:05.024264652Z" level=info msg="shim disconnected" id=fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e
Sep 13 01:36:05.024323 env[1477]: time="2025-09-13T01:36:05.024318372Z" level=warning msg="cleaning up after shim disconnected" id=fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e namespace=k8s.io
Sep 13 01:36:05.024323 env[1477]: time="2025-09-13T01:36:05.024327172Z" level=info msg="cleaning up dead shim"
Sep 13 01:36:05.031675 env[1477]: time="2025-09-13T01:36:05.031622205Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4335 runtime=io.containerd.runc.v2\n"
Sep 13 01:36:05.032038 env[1477]: time="2025-09-13T01:36:05.032007524Z" level=info msg="TearDown network for sandbox \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\" successfully"
Sep 13 01:36:05.032038 env[1477]: time="2025-09-13T01:36:05.032039484Z" level=info msg="StopPodSandbox for \"fa59df926639780f98424dce57906543bad34580ad9d919a3d71c84f6dc2bf8e\" returns successfully"
Sep 13 01:36:05.194308 kubelet[2435]: I0913 01:36:05.194273 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-run\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194308 kubelet[2435]: I0913 01:36:05.194309 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-cgroup\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194323 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-net\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194352 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hostproc\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194367 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-xtables-lock\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194381 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-kernel\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194415 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5trr4\" (UniqueName: \"kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-kube-api-access-5trr4\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194432 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-config-path\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194450 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hubble-tls\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194470 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-clustermesh-secrets\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194493 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-etc-cni-netd\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194508 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cni-path\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194523 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-lib-modules\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194551 kubelet[2435]: I0913 01:36:05.194539 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-ipsec-secrets\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194556 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-bpf-maps\") pod \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\" (UID: \"3c95e415-bb2d-4c60-bf45-acff8a30d27b\") "
Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194633 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194237 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194664 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194687 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194702 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194715 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.194844 kubelet[2435]: I0913 01:36:05.194728 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.195093 kubelet[2435]: I0913 01:36:05.195052 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.195453 kubelet[2435]: I0913 01:36:05.195432 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.195555 kubelet[2435]: I0913 01:36:05.195542 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:36:05.197246 kubelet[2435]: I0913 01:36:05.197196 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:36:05.199192 systemd[1]: var-lib-kubelet-pods-3c95e415\x2dbb2d\x2d4c60\x2dbf45\x2dacff8a30d27b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5trr4.mount: Deactivated successfully. 
Sep 13 01:36:05.203732 kubelet[2435]: I0913 01:36:05.203690 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-kube-api-access-5trr4" (OuterVolumeSpecName: "kube-api-access-5trr4") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "kube-api-access-5trr4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:36:05.205243 kubelet[2435]: I0913 01:36:05.204968 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:36:05.205255 systemd[1]: var-lib-kubelet-pods-3c95e415\x2dbb2d\x2d4c60\x2dbf45\x2dacff8a30d27b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:36:05.205342 systemd[1]: var-lib-kubelet-pods-3c95e415\x2dbb2d\x2d4c60\x2dbf45\x2dacff8a30d27b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:36:05.205831 kubelet[2435]: I0913 01:36:05.205795 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:36:05.206323 kubelet[2435]: I0913 01:36:05.206287 2435 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3c95e415-bb2d-4c60-bf45-acff8a30d27b" (UID: "3c95e415-bb2d-4c60-bf45-acff8a30d27b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295097 2435 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hubble-tls\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295123 2435 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-clustermesh-secrets\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295132 2435 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-etc-cni-netd\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295141 2435 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cni-path\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295150 2435 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-lib-modules\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295161 2435 
reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295170 2435 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-bpf-maps\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.295201 kubelet[2435]: I0913 01:36:05.295179 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-run\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296346 kubelet[2435]: I0913 01:36:05.296323 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-cgroup\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296410 kubelet[2435]: I0913 01:36:05.296352 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-net\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296410 kubelet[2435]: I0913 01:36:05.296362 2435 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-hostproc\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296410 kubelet[2435]: I0913 01:36:05.296371 2435 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-xtables-lock\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296410 kubelet[2435]: I0913 01:36:05.296380 2435 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c95e415-bb2d-4c60-bf45-acff8a30d27b-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296410 kubelet[2435]: I0913 01:36:05.296389 2435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5trr4\" (UniqueName: \"kubernetes.io/projected/3c95e415-bb2d-4c60-bf45-acff8a30d27b-kube-api-access-5trr4\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.296410 kubelet[2435]: I0913 01:36:05.296397 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c95e415-bb2d-4c60-bf45-acff8a30d27b-cilium-config-path\") on node \"ci-3510.3.8-n-a3199d6d1b\" DevicePath \"\"" Sep 13 01:36:05.484800 systemd[1]: var-lib-kubelet-pods-3c95e415\x2dbb2d\x2d4c60\x2dbf45\x2dacff8a30d27b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 01:36:05.491351 systemd[1]: Removed slice kubepods-burstable-pod3c95e415_bb2d_4c60_bf45_acff8a30d27b.slice. Sep 13 01:36:05.968892 kubelet[2435]: I0913 01:36:05.968858 2435 scope.go:117] "RemoveContainer" containerID="6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3" Sep 13 01:36:05.971405 env[1477]: time="2025-09-13T01:36:05.971370389Z" level=info msg="RemoveContainer for \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\"" Sep 13 01:36:05.979475 env[1477]: time="2025-09-13T01:36:05.979423782Z" level=info msg="RemoveContainer for \"6e09cbe460cb482b99acc0e4edff106875dbfb247a205b6c82e24d6e090931e3\" returns successfully" Sep 13 01:36:06.035683 systemd[1]: Created slice kubepods-burstable-podc049eef8_5aec_472a_81f1_8d675a828ea5.slice. 
Sep 13 01:36:06.201265 kubelet[2435]: I0913 01:36:06.201224 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-etc-cni-netd\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201265 kubelet[2435]: I0913 01:36:06.201267 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-cilium-cgroup\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201286 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-bpf-maps\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201300 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-hostproc\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201315 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-xtables-lock\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201341 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c049eef8-5aec-472a-81f1-8d675a828ea5-clustermesh-secrets\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201362 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c049eef8-5aec-472a-81f1-8d675a828ea5-cilium-config-path\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201377 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c049eef8-5aec-472a-81f1-8d675a828ea5-cilium-ipsec-secrets\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201394 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhlmz\" (UniqueName: \"kubernetes.io/projected/c049eef8-5aec-472a-81f1-8d675a828ea5-kube-api-access-mhlmz\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201408 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-cilium-run\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201424 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-host-proc-sys-kernel\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201440 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c049eef8-5aec-472a-81f1-8d675a828ea5-hubble-tls\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201463 kubelet[2435]: I0913 01:36:06.201455 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-cni-path\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201748 kubelet[2435]: I0913 01:36:06.201469 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-lib-modules\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.201748 kubelet[2435]: I0913 01:36:06.201483 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c049eef8-5aec-472a-81f1-8d675a828ea5-host-proc-sys-net\") pod \"cilium-n5nt4\" (UID: \"c049eef8-5aec-472a-81f1-8d675a828ea5\") " pod="kube-system/cilium-n5nt4" Sep 13 01:36:06.340364 env[1477]: time="2025-09-13T01:36:06.339311160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5nt4,Uid:c049eef8-5aec-472a-81f1-8d675a828ea5,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:06.371938 env[1477]: time="2025-09-13T01:36:06.371864369Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:06.372115 env[1477]: time="2025-09-13T01:36:06.372092889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:06.372228 env[1477]: time="2025-09-13T01:36:06.372207648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:06.372521 env[1477]: time="2025-09-13T01:36:06.372480128Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513 pid=4364 runtime=io.containerd.runc.v2 Sep 13 01:36:06.382633 systemd[1]: Started cri-containerd-afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513.scope. Sep 13 01:36:06.410608 env[1477]: time="2025-09-13T01:36:06.410558812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n5nt4,Uid:c049eef8-5aec-472a-81f1-8d675a828ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\"" Sep 13 01:36:06.423816 env[1477]: time="2025-09-13T01:36:06.423773519Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:36:06.458282 env[1477]: time="2025-09-13T01:36:06.458234127Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8\"" Sep 13 01:36:06.459243 env[1477]: time="2025-09-13T01:36:06.459216086Z" level=info msg="StartContainer for \"7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8\"" Sep 13 
01:36:06.474322 systemd[1]: Started cri-containerd-7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8.scope. Sep 13 01:36:06.518631 env[1477]: time="2025-09-13T01:36:06.513683674Z" level=info msg="StartContainer for \"7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8\" returns successfully" Sep 13 01:36:06.535831 systemd[1]: cri-containerd-7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8.scope: Deactivated successfully. Sep 13 01:36:06.552800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8-rootfs.mount: Deactivated successfully. Sep 13 01:36:06.576193 env[1477]: time="2025-09-13T01:36:06.576145735Z" level=info msg="shim disconnected" id=7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8 Sep 13 01:36:06.576547 env[1477]: time="2025-09-13T01:36:06.576527494Z" level=warning msg="cleaning up after shim disconnected" id=7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8 namespace=k8s.io Sep 13 01:36:06.576689 env[1477]: time="2025-09-13T01:36:06.576672574Z" level=info msg="cleaning up dead shim" Sep 13 01:36:06.584271 env[1477]: time="2025-09-13T01:36:06.584230007Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4449 runtime=io.containerd.runc.v2\n" Sep 13 01:36:06.892037 kubelet[2435]: W0913 01:36:06.891973 2435 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c95e415_bb2d_4c60_bf45_acff8a30d27b.slice/cri-containerd-445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761.scope WatchSource:0}: container "445d9bde5a2ea6328fc6f51ac0080504d7df1611be0878d2a95d639a1024d761" in namespace "k8s.io": not found Sep 13 01:36:06.987486 env[1477]: time="2025-09-13T01:36:06.987437064Z" level=info msg="CreateContainer within sandbox 
\"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:36:07.026697 env[1477]: time="2025-09-13T01:36:07.026648147Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420\"" Sep 13 01:36:07.027488 env[1477]: time="2025-09-13T01:36:07.027463626Z" level=info msg="StartContainer for \"1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420\"" Sep 13 01:36:07.044338 systemd[1]: Started cri-containerd-1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420.scope. Sep 13 01:36:07.075791 env[1477]: time="2025-09-13T01:36:07.075739020Z" level=info msg="StartContainer for \"1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420\" returns successfully" Sep 13 01:36:07.080686 systemd[1]: cri-containerd-1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420.scope: Deactivated successfully. 
Sep 13 01:36:07.109202 env[1477]: time="2025-09-13T01:36:07.109144748Z" level=info msg="shim disconnected" id=1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420 Sep 13 01:36:07.109202 env[1477]: time="2025-09-13T01:36:07.109191668Z" level=warning msg="cleaning up after shim disconnected" id=1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420 namespace=k8s.io Sep 13 01:36:07.109202 env[1477]: time="2025-09-13T01:36:07.109200908Z" level=info msg="cleaning up dead shim" Sep 13 01:36:07.125448 env[1477]: time="2025-09-13T01:36:07.125401893Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4511 runtime=io.containerd.runc.v2\n" Sep 13 01:36:07.484946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420-rootfs.mount: Deactivated successfully. Sep 13 01:36:07.487393 kubelet[2435]: I0913 01:36:07.487355 2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c95e415-bb2d-4c60-bf45-acff8a30d27b" path="/var/lib/kubelet/pods/3c95e415-bb2d-4c60-bf45-acff8a30d27b/volumes" Sep 13 01:36:07.887763 kubelet[2435]: I0913 01:36:07.886575 2435 setters.go:618] "Node became not ready" node="ci-3510.3.8-n-a3199d6d1b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T01:36:07Z","lastTransitionTime":"2025-09-13T01:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 01:36:07.995314 env[1477]: time="2025-09-13T01:36:07.995232188Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:36:08.035200 env[1477]: time="2025-09-13T01:36:08.035141711Z" level=info msg="CreateContainer within 
sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a\"" Sep 13 01:36:08.036044 env[1477]: time="2025-09-13T01:36:08.036018950Z" level=info msg="StartContainer for \"dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a\"" Sep 13 01:36:08.054220 systemd[1]: Started cri-containerd-dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a.scope. Sep 13 01:36:08.085424 systemd[1]: cri-containerd-dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a.scope: Deactivated successfully. Sep 13 01:36:08.101105 env[1477]: time="2025-09-13T01:36:08.101042408Z" level=info msg="StartContainer for \"dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a\" returns successfully" Sep 13 01:36:08.135105 env[1477]: time="2025-09-13T01:36:08.135029656Z" level=info msg="shim disconnected" id=dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a Sep 13 01:36:08.135105 env[1477]: time="2025-09-13T01:36:08.135102336Z" level=warning msg="cleaning up after shim disconnected" id=dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a namespace=k8s.io Sep 13 01:36:08.135105 env[1477]: time="2025-09-13T01:36:08.135111936Z" level=info msg="cleaning up dead shim" Sep 13 01:36:08.143205 env[1477]: time="2025-09-13T01:36:08.143088689Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4570 runtime=io.containerd.runc.v2\n" Sep 13 01:36:08.484997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a-rootfs.mount: Deactivated successfully. 
Sep 13 01:36:08.591386 kubelet[2435]: E0913 01:36:08.591349 2435 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 01:36:08.998046 env[1477]: time="2025-09-13T01:36:08.998001760Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:36:09.035542 env[1477]: time="2025-09-13T01:36:09.035471165Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e\""
Sep 13 01:36:09.036346 env[1477]: time="2025-09-13T01:36:09.036318764Z" level=info msg="StartContainer for \"dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e\""
Sep 13 01:36:09.057126 systemd[1]: Started cri-containerd-dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e.scope.
Sep 13 01:36:09.082147 systemd[1]: cri-containerd-dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e.scope: Deactivated successfully.
Sep 13 01:36:09.088936 env[1477]: time="2025-09-13T01:36:09.088893195Z" level=info msg="StartContainer for \"dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e\" returns successfully"
Sep 13 01:36:09.120954 env[1477]: time="2025-09-13T01:36:09.120908204Z" level=info msg="shim disconnected" id=dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e
Sep 13 01:36:09.121334 env[1477]: time="2025-09-13T01:36:09.121306844Z" level=warning msg="cleaning up after shim disconnected" id=dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e namespace=k8s.io
Sep 13 01:36:09.121449 env[1477]: time="2025-09-13T01:36:09.121433004Z" level=info msg="cleaning up dead shim"
Sep 13 01:36:09.129211 env[1477]: time="2025-09-13T01:36:09.129177397Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:36:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4625 runtime=io.containerd.runc.v2\n"
Sep 13 01:36:09.485066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e-rootfs.mount: Deactivated successfully.
Sep 13 01:36:10.000570 env[1477]: time="2025-09-13T01:36:10.000527335Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:36:10.001127 kubelet[2435]: W0913 01:36:10.001068 2435 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc049eef8_5aec_472a_81f1_8d675a828ea5.slice/cri-containerd-7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8.scope WatchSource:0}: task 7b4da0a0ae0decd2da591473bb75930f7846a99ba167d83e0e4f95ce3d3103b8 not found
Sep 13 01:36:10.033198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390130498.mount: Deactivated successfully.
Sep 13 01:36:10.039746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1140578488.mount: Deactivated successfully.
Sep 13 01:36:10.051330 env[1477]: time="2025-09-13T01:36:10.051280607Z" level=info msg="CreateContainer within sandbox \"afd1328c759a9c0d6aad8085cbe370b3304d337bfbe9f899a3a6856efafbd513\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b\""
Sep 13 01:36:10.052207 env[1477]: time="2025-09-13T01:36:10.052180486Z" level=info msg="StartContainer for \"cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b\""
Sep 13 01:36:10.067008 systemd[1]: Started cri-containerd-cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b.scope.
Sep 13 01:36:10.107327 env[1477]: time="2025-09-13T01:36:10.107269074Z" level=info msg="StartContainer for \"cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b\" returns successfully"
Sep 13 01:36:10.562619 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 13 01:36:11.102526 systemd[1]: run-containerd-runc-k8s.io-cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b-runc.5Iivsy.mount: Deactivated successfully.
Sep 13 01:36:13.121090 kubelet[2435]: W0913 01:36:13.121052 2435 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc049eef8_5aec_472a_81f1_8d675a828ea5.slice/cri-containerd-1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420.scope WatchSource:0}: task 1373fbcc9131d92d46de5f7bd0df962503da782974c247b72067152d8e725420 not found
Sep 13 01:36:13.152194 systemd-networkd[1640]: lxc_health: Link UP
Sep 13 01:36:13.165630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:36:13.165781 systemd-networkd[1640]: lxc_health: Gained carrier
Sep 13 01:36:13.268166 systemd[1]: run-containerd-runc-k8s.io-cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b-runc.wszfre.mount: Deactivated successfully.
Sep 13 01:36:14.364495 kubelet[2435]: I0913 01:36:14.364425 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n5nt4" podStartSLOduration=9.364402126 podStartE2EDuration="9.364402126s" podCreationTimestamp="2025-09-13 01:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:36:11.0152961 +0000 UTC m=+227.714837008" watchObservedRunningTime="2025-09-13 01:36:14.364402126 +0000 UTC m=+231.063943034"
Sep 13 01:36:14.916820 systemd-networkd[1640]: lxc_health: Gained IPv6LL
Sep 13 01:36:16.232017 kubelet[2435]: W0913 01:36:16.231895 2435 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc049eef8_5aec_472a_81f1_8d675a828ea5.slice/cri-containerd-dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a.scope WatchSource:0}: task dd439f5f1f1e01b84bbd204faf27a7b0c91eea5d1282392eb9753f8296e4644a not found
Sep 13 01:36:17.572113 systemd[1]: run-containerd-runc-k8s.io-cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b-runc.YOWOLH.mount: Deactivated successfully.
Sep 13 01:36:19.337980 kubelet[2435]: W0913 01:36:19.337936 2435 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc049eef8_5aec_472a_81f1_8d675a828ea5.slice/cri-containerd-dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e.scope WatchSource:0}: task dd3ce2c53a82bb455b3693e3920efc34b8ee7ccc66633165ba695559cd55057e not found
Sep 13 01:36:19.696888 systemd[1]: run-containerd-runc-k8s.io-cb314f00f7173b3a0f827aacdd15b11d425f6443cde5c007c5155cbabd3baa2b-runc.iWLihJ.mount: Deactivated successfully.
Sep 13 01:36:19.837101 sshd[4310]: pam_unix(sshd:session): session closed for user core
Sep 13 01:36:19.840298 systemd[1]: sshd@24-10.200.20.47:22-10.200.16.10:57458.service: Deactivated successfully.
Sep 13 01:36:19.840483 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit.
Sep 13 01:36:19.841015 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 01:36:19.841903 systemd-logind[1462]: Removed session 27.