Sep 6 01:19:34.034825 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 01:19:34.034843 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 6 01:19:34.034852 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 6 01:19:34.034859 kernel: printk: bootconsole [pl11] enabled
Sep 6 01:19:34.034864 kernel: efi: EFI v2.70 by EDK II
Sep 6 01:19:34.034870 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 6 01:19:34.034877 kernel: random: crng init done
Sep 6 01:19:34.034882 kernel: ACPI: Early table checksum verification disabled
Sep 6 01:19:34.041948 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 6 01:19:34.041957 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.041962 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.041968 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 6 01:19:34.041979 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.041986 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.041993 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.041999 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.042005 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.042013 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.042019 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 6 01:19:34.042025 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:34.042031 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 6 01:19:34.042037 kernel: NUMA: Failed to initialise from firmware
Sep 6 01:19:34.042043 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 6 01:19:34.042049 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Sep 6 01:19:34.042055 kernel: Zone ranges:
Sep 6 01:19:34.042061 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Sep 6 01:19:34.042067 kernel:   DMA32    empty
Sep 6 01:19:34.042073 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Sep 6 01:19:34.042080 kernel: Movable zone start for each node
Sep 6 01:19:34.042086 kernel: Early memory node ranges
Sep 6 01:19:34.042092 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 6 01:19:34.042098 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 6 01:19:34.042104 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 6 01:19:34.042110 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 6 01:19:34.042116 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 6 01:19:34.042121 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 6 01:19:34.042127 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 6 01:19:34.042133 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 6 01:19:34.042139 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 6 01:19:34.042145 kernel: psci: probing for conduit method from ACPI.
Sep 6 01:19:34.042155 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 01:19:34.042161 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 01:19:34.042167 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 6 01:19:34.042174 kernel: psci: SMC Calling Convention v1.4
Sep 6 01:19:34.042180 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 6 01:19:34.042188 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 6 01:19:34.042194 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 6 01:19:34.042200 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 6 01:19:34.042207 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 6 01:19:34.042213 kernel: Detected PIPT I-cache on CPU0
Sep 6 01:19:34.042220 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 01:19:34.042226 kernel: CPU features: detected: Hardware dirty bit management
Sep 6 01:19:34.042232 kernel: CPU features: detected: Spectre-BHB
Sep 6 01:19:34.042238 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 01:19:34.042245 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 01:19:34.042251 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 01:19:34.042258 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 6 01:19:34.042265 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 01:19:34.042271 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Sep 6 01:19:34.042277 kernel: Policy zone: Normal
Sep 6 01:19:34.042285 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 01:19:34.042292 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 01:19:34.042298 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 01:19:34.042304 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 01:19:34.042310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 01:19:34.042317 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 6 01:19:34.042323 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Sep 6 01:19:34.042331 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 01:19:34.042337 kernel: trace event string verifier disabled
Sep 6 01:19:34.042343 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 01:19:34.042350 kernel: rcu: RCU event tracing is enabled.
Sep 6 01:19:34.042357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 01:19:34.042363 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 01:19:34.042370 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 01:19:34.042376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 01:19:34.042382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 01:19:34.042388 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 01:19:34.042394 kernel: GICv3: 960 SPIs implemented
Sep 6 01:19:34.042402 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 01:19:34.042408 kernel: GICv3: Distributor has no Range Selector support
Sep 6 01:19:34.042414 kernel: Root IRQ handler: gic_handle_irq
Sep 6 01:19:34.042421 kernel: GICv3: 16 PPIs implemented
Sep 6 01:19:34.042427 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 6 01:19:34.042433 kernel: ITS: No ITS available, not enabling LPIs
Sep 6 01:19:34.042439 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 01:19:34.042446 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 01:19:34.042452 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 01:19:34.042458 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 01:19:34.042465 kernel: Console: colour dummy device 80x25
Sep 6 01:19:34.042473 kernel: printk: console [tty1] enabled
Sep 6 01:19:34.042480 kernel: ACPI: Core revision 20210730
Sep 6 01:19:34.042487 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 01:19:34.042493 kernel: pid_max: default: 32768 minimum: 301
Sep 6 01:19:34.042500 kernel: LSM: Security Framework initializing
Sep 6 01:19:34.042506 kernel: SELinux:  Initializing.
Sep 6 01:19:34.042513 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 01:19:34.042520 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 01:19:34.042527 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 6 01:19:34.042535 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 6 01:19:34.042541 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 01:19:34.042548 kernel: Remapping and enabling EFI services.
Sep 6 01:19:34.042554 kernel: smp: Bringing up secondary CPUs ...
Sep 6 01:19:34.042560 kernel: Detected PIPT I-cache on CPU1
Sep 6 01:19:34.042567 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 6 01:19:34.042574 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 01:19:34.042580 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 01:19:34.042587 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 01:19:34.042593 kernel: SMP: Total of 2 processors activated.
Sep 6 01:19:34.042601 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 01:19:34.042607 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 6 01:19:34.042614 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 01:19:34.042621 kernel: CPU features: detected: CRC32 instructions
Sep 6 01:19:34.042627 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 01:19:34.042634 kernel: CPU features: detected: LSE atomic instructions
Sep 6 01:19:34.042640 kernel: CPU features: detected: Privileged Access Never
Sep 6 01:19:34.042647 kernel: CPU: All CPU(s) started at EL1
Sep 6 01:19:34.042653 kernel: alternatives: patching kernel code
Sep 6 01:19:34.042661 kernel: devtmpfs: initialized
Sep 6 01:19:34.042672 kernel: KASLR enabled
Sep 6 01:19:34.042679 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 01:19:34.042687 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 01:19:34.042694 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 01:19:34.042701 kernel: SMBIOS 3.1.0 present.
Sep 6 01:19:34.042708 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 6 01:19:34.042715 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 01:19:34.042722 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 01:19:34.042730 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 01:19:34.042738 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 01:19:34.042745 kernel: audit: initializing netlink subsys (disabled)
Sep 6 01:19:34.042751 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1
Sep 6 01:19:34.042758 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 01:19:34.042765 kernel: cpuidle: using governor menu
Sep 6 01:19:34.042772 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 01:19:34.042780 kernel: ASID allocator initialised with 32768 entries
Sep 6 01:19:34.042787 kernel: ACPI: bus type PCI registered
Sep 6 01:19:34.042794 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 01:19:34.042801 kernel: Serial: AMBA PL011 UART driver
Sep 6 01:19:34.042807 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 01:19:34.042814 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 01:19:34.042821 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 01:19:34.042828 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 01:19:34.042835 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 01:19:34.042843 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 01:19:34.042850 kernel: ACPI: Added _OSI(Module Device)
Sep 6 01:19:34.042856 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 01:19:34.042863 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 01:19:34.042870 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 01:19:34.042877 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 01:19:34.042899 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 01:19:34.042908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 01:19:34.042915 kernel: ACPI: Interpreter enabled
Sep 6 01:19:34.042924 kernel: ACPI: Using GIC for interrupt routing
Sep 6 01:19:34.042931 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 01:19:34.042938 kernel: printk: console [ttyAMA0] enabled
Sep 6 01:19:34.042945 kernel: printk: bootconsole [pl11] disabled
Sep 6 01:19:34.042952 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 6 01:19:34.042959 kernel: iommu: Default domain type: Translated
Sep 6 01:19:34.042966 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 01:19:34.042972 kernel: vgaarb: loaded
Sep 6 01:19:34.042979 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 01:19:34.042986 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 01:19:34.042995 kernel: PTP clock support registered
Sep 6 01:19:34.043001 kernel: Registered efivars operations
Sep 6 01:19:34.043008 kernel: No ACPI PMU IRQ for CPU0
Sep 6 01:19:34.043015 kernel: No ACPI PMU IRQ for CPU1
Sep 6 01:19:34.043022 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 01:19:34.043029 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 01:19:34.043036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 01:19:34.043042 kernel: pnp: PnP ACPI init
Sep 6 01:19:34.043049 kernel: pnp: PnP ACPI: found 0 devices
Sep 6 01:19:34.043057 kernel: NET: Registered PF_INET protocol family
Sep 6 01:19:34.043064 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 01:19:34.043071 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 01:19:34.043078 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 01:19:34.043085 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 01:19:34.043092 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 01:19:34.043099 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 01:19:34.043106 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 01:19:34.043114 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 01:19:34.043121 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 01:19:34.043128 kernel: PCI: CLS 0 bytes, default 64
Sep 6 01:19:34.043135 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 6 01:19:34.043142 kernel: kvm [1]: HYP mode not available
Sep 6 01:19:34.043149 kernel: Initialise system trusted keyrings
Sep 6 01:19:34.043156 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 01:19:34.043163 kernel: Key type asymmetric registered
Sep 6 01:19:34.043170 kernel: Asymmetric key parser 'x509' registered
Sep 6 01:19:34.043178 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 01:19:34.043185 kernel: io scheduler mq-deadline registered
Sep 6 01:19:34.043205 kernel: io scheduler kyber registered
Sep 6 01:19:34.043212 kernel: io scheduler bfq registered
Sep 6 01:19:34.043219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 01:19:34.043225 kernel: thunder_xcv, ver 1.0
Sep 6 01:19:34.043232 kernel: thunder_bgx, ver 1.0
Sep 6 01:19:34.043239 kernel: nicpf, ver 1.0
Sep 6 01:19:34.043246 kernel: nicvf, ver 1.0
Sep 6 01:19:34.043375 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 01:19:34.043441 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T01:19:33 UTC (1757121573)
Sep 6 01:19:34.043450 kernel: efifb: probing for efifb
Sep 6 01:19:34.043457 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 6 01:19:34.043465 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 6 01:19:34.043471 kernel: efifb: scrolling: redraw
Sep 6 01:19:34.043478 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 6 01:19:34.043487 kernel: Console: switching to colour frame buffer device 128x48
Sep 6 01:19:34.043495 kernel: fb0: EFI VGA frame buffer device
Sep 6 01:19:34.043502 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 6 01:19:34.043509 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 01:19:34.043516 kernel: NET: Registered PF_INET6 protocol family
Sep 6 01:19:34.043522 kernel: Segment Routing with IPv6
Sep 6 01:19:34.043529 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 01:19:34.043536 kernel: NET: Registered PF_PACKET protocol family
Sep 6 01:19:34.043543 kernel: Key type dns_resolver registered
Sep 6 01:19:34.043549 kernel: registered taskstats version 1
Sep 6 01:19:34.043556 kernel: Loading compiled-in X.509 certificates
Sep 6 01:19:34.043565 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 6 01:19:34.043572 kernel: Key type .fscrypt registered
Sep 6 01:19:34.043578 kernel: Key type fscrypt-provisioning registered
Sep 6 01:19:34.043585 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 01:19:34.043592 kernel: ima: Allocated hash algorithm: sha1
Sep 6 01:19:34.043599 kernel: ima: No architecture policies found
Sep 6 01:19:34.043605 kernel: clk: Disabling unused clocks
Sep 6 01:19:34.043612 kernel: Freeing unused kernel memory: 36416K
Sep 6 01:19:34.043620 kernel: Run /init as init process
Sep 6 01:19:34.043627 kernel:   with arguments:
Sep 6 01:19:34.043634 kernel:     /init
Sep 6 01:19:34.043640 kernel:   with environment:
Sep 6 01:19:34.043647 kernel:     HOME=/
Sep 6 01:19:34.043654 kernel:     TERM=linux
Sep 6 01:19:34.043661 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 01:19:34.043670 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 01:19:34.043680 systemd[1]: Detected virtualization microsoft.
Sep 6 01:19:34.043688 systemd[1]: Detected architecture arm64.
Sep 6 01:19:34.043695 systemd[1]: Running in initrd.
Sep 6 01:19:34.043702 systemd[1]: No hostname configured, using default hostname.
Sep 6 01:19:34.043709 systemd[1]: Hostname set to .
Sep 6 01:19:34.043717 systemd[1]: Initializing machine ID from random generator.
Sep 6 01:19:34.043724 systemd[1]: Queued start job for default target initrd.target.
Sep 6 01:19:34.043731 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 01:19:34.043739 systemd[1]: Reached target cryptsetup.target.
Sep 6 01:19:34.043746 systemd[1]: Reached target paths.target.
Sep 6 01:19:34.043753 systemd[1]: Reached target slices.target.
Sep 6 01:19:34.043760 systemd[1]: Reached target swap.target.
Sep 6 01:19:34.043768 systemd[1]: Reached target timers.target.
Sep 6 01:19:34.043775 systemd[1]: Listening on iscsid.socket.
Sep 6 01:19:34.043784 systemd[1]: Listening on iscsiuio.socket.
Sep 6 01:19:34.043793 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 01:19:34.043804 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 01:19:34.043812 systemd[1]: Listening on systemd-journald.socket.
Sep 6 01:19:34.043821 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 01:19:34.043829 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 01:19:34.043838 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 01:19:34.043846 systemd[1]: Reached target sockets.target.
Sep 6 01:19:34.043854 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 01:19:34.043863 systemd[1]: Finished network-cleanup.service.
Sep 6 01:19:34.043872 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 01:19:34.043881 systemd[1]: Starting systemd-journald.service...
Sep 6 01:19:34.043903 systemd[1]: Starting systemd-modules-load.service...
Sep 6 01:19:34.043912 systemd[1]: Starting systemd-resolved.service...
Sep 6 01:19:34.043924 systemd-journald[276]: Journal started
Sep 6 01:19:34.043973 systemd-journald[276]: Runtime Journal (/run/log/journal/bf1a728b16da43b695cb761e8b8aaaab) is 8.0M, max 78.5M, 70.5M free.
Sep 6 01:19:34.038355 systemd-modules-load[277]: Inserted module 'overlay'
Sep 6 01:19:34.068979 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 01:19:34.073711 systemd-resolved[278]: Positive Trust Anchors:
Sep 6 01:19:34.096428 systemd[1]: Started systemd-journald.service.
Sep 6 01:19:34.096452 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 01:19:34.073725 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 01:19:34.114805 kernel: Bridge firewalling registered
Sep 6 01:19:34.073753 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 01:19:34.191222 kernel: audit: type=1130 audit(1757121574.157:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.191250 kernel: SCSI subsystem initialized
Sep 6 01:19:34.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.085507 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 6 01:19:34.231768 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 01:19:34.231793 kernel: audit: type=1130 audit(1757121574.207:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.231803 kernel: device-mapper: uevent: version 1.0.3
Sep 6 01:19:34.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.116329 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 6 01:19:34.157853 systemd[1]: Started systemd-resolved.service.
Sep 6 01:19:34.275987 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 01:19:34.276010 kernel: audit: type=1130 audit(1757121574.242:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.207974 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 01:19:34.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.243455 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 01:19:34.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.269068 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 6 01:19:34.357614 kernel: audit: type=1130 audit(1757121574.278:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.357635 kernel: audit: type=1130 audit(1757121574.306:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.357644 kernel: audit: type=1130 audit(1757121574.333:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.278902 systemd[1]: Finished systemd-modules-load.service.
Sep 6 01:19:34.307017 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 01:19:34.333955 systemd[1]: Reached target nss-lookup.target.
Sep 6 01:19:34.367007 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 01:19:34.379037 systemd[1]: Starting systemd-sysctl.service...
Sep 6 01:19:34.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.390688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 01:19:34.463973 kernel: audit: type=1130 audit(1757121574.413:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.464003 kernel: audit: type=1130 audit(1757121574.436:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.403051 systemd[1]: Finished systemd-sysctl.service.
Sep 6 01:19:34.419227 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 01:19:34.497652 kernel: audit: type=1130 audit(1757121574.465:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.438128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 01:19:34.466999 systemd[1]: Starting dracut-cmdline.service...
Sep 6 01:19:34.513624 dracut-cmdline[298]: dracut-dracut-053
Sep 6 01:19:34.518136 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 01:19:34.587933 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 01:19:34.602920 kernel: iscsi: registered transport (tcp)
Sep 6 01:19:34.625709 kernel: iscsi: registered transport (qla4xxx)
Sep 6 01:19:34.625743 kernel: QLogic iSCSI HBA Driver
Sep 6 01:19:34.662419 systemd[1]: Finished dracut-cmdline.service.
Sep 6 01:19:34.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:34.668258 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 01:19:34.725903 kernel: raid6: neonx8   gen() 13803 MB/s
Sep 6 01:19:34.742906 kernel: raid6: neonx8   xor() 10830 MB/s
Sep 6 01:19:34.762896 kernel: raid6: neonx4   gen() 13539 MB/s
Sep 6 01:19:34.784910 kernel: raid6: neonx4   xor() 11145 MB/s
Sep 6 01:19:34.804898 kernel: raid6: neonx2   gen() 12927 MB/s
Sep 6 01:19:34.825895 kernel: raid6: neonx2   xor() 10254 MB/s
Sep 6 01:19:34.846896 kernel: raid6: neonx1   gen() 10461 MB/s
Sep 6 01:19:34.867898 kernel: raid6: neonx1   xor()  8786 MB/s
Sep 6 01:19:34.887894 kernel: raid6: int64x8  gen()  6273 MB/s
Sep 6 01:19:34.909895 kernel: raid6: int64x8  xor()  3545 MB/s
Sep 6 01:19:34.930894 kernel: raid6: int64x4  gen()  7229 MB/s
Sep 6 01:19:34.950894 kernel: raid6: int64x4  xor()  3856 MB/s
Sep 6 01:19:34.972895 kernel: raid6: int64x2  gen()  6152 MB/s
Sep 6 01:19:34.993893 kernel: raid6: int64x2  xor()  3319 MB/s
Sep 6 01:19:35.013894 kernel: raid6: int64x1  gen()  5046 MB/s
Sep 6 01:19:35.040009 kernel: raid6: int64x1  xor()  2646 MB/s
Sep 6 01:19:35.040024 kernel: raid6: using algorithm neonx8 gen() 13803 MB/s
Sep 6 01:19:35.040032 kernel: raid6: .... xor() 10830 MB/s, rmw enabled
Sep 6 01:19:35.044766 kernel: raid6: using neon recovery algorithm
Sep 6 01:19:35.066698 kernel: xor: measuring software checksum speed
Sep 6 01:19:35.066711 kernel:    8regs           : 17209 MB/sec
Sep 6 01:19:35.070950 kernel:    32regs          : 20681 MB/sec
Sep 6 01:19:35.074878 kernel:    arm64_neon      : 27917 MB/sec
Sep 6 01:19:35.074893 kernel: xor: using function: arm64_neon (27917 MB/sec)
Sep 6 01:19:35.136901 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 6 01:19:35.146402 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 01:19:35.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:35.155000 audit: BPF prog-id=7 op=LOAD
Sep 6 01:19:35.155000 audit: BPF prog-id=8 op=LOAD
Sep 6 01:19:35.155790 systemd[1]: Starting systemd-udevd.service...
Sep 6 01:19:35.207229 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Sep 6 01:19:35.215423 systemd[1]: Started systemd-udevd.service.
Sep 6 01:19:35.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:35.227179 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 01:19:35.239637 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Sep 6 01:19:35.269840 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 01:19:35.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:35.275413 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 01:19:35.312639 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 01:19:35.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:35.367006 kernel: hv_vmbus: Vmbus version:5.3
Sep 6 01:19:35.379908 kernel: hv_vmbus: registering driver hid_hyperv
Sep 6 01:19:35.396911 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 6 01:19:35.396964 kernel: hv_vmbus: registering driver hv_netvsc
Sep 6 01:19:35.396978 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 6 01:19:35.405602 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 6 01:19:35.409907 kernel: hv_vmbus: registering driver hv_storvsc
Sep 6 01:19:35.425453 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 6 01:19:35.433902 kernel: scsi host0: storvsc_host_t
Sep 6 01:19:35.434076 kernel: scsi host1: storvsc_host_t
Sep 6 01:19:35.434099 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 6 01:19:35.447906 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 6 01:19:35.465682 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 6 01:19:35.473229 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 6 01:19:35.473250 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 6 01:19:35.501690 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 6 01:19:35.501790 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 6 01:19:35.501873 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 6 01:19:35.501981 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 6 01:19:35.502068 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 6 01:19:35.502155 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 01:19:35.502169 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 6 01:19:35.517918 kernel: hv_netvsc 000d3af9-3edc-000d-3af9-3edc000d3af9 eth0: VF slot 1 added
Sep 6 01:19:35.524910 kernel: hv_vmbus: registering driver hv_pci
Sep 6 01:19:35.538761 kernel: hv_pci 548cb192-3dc8-425f-9f02-7f17dc3c661b: PCI VMBus probing: Using version 0x10004
Sep 6 01:19:35.615848 kernel: hv_pci 548cb192-3dc8-425f-9f02-7f17dc3c661b: PCI host bridge to bus 3dc8:00
Sep 6 01:19:35.615971 kernel: pci_bus 3dc8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 6 01:19:35.616076 kernel: pci_bus 3dc8:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 6 01:19:35.616161 kernel: pci 3dc8:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 6 01:19:35.616260 kernel: pci 3dc8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 6 01:19:35.616342 kernel: pci 3dc8:00:02.0: enabling Extended Tags
Sep 6 01:19:35.616426 kernel: pci 3dc8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3dc8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 6 01:19:35.616507 kernel: pci_bus 3dc8:00: busn_res: [bus 00-ff] end is updated to 00
Sep 6 01:19:35.616581 kernel: pci 3dc8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 6 01:19:35.653491 kernel: mlx5_core 3dc8:00:02.0: enabling device (0000 -> 0002)
Sep 6 01:19:35.891618 kernel: mlx5_core 3dc8:00:02.0: firmware version: 16.30.1284
Sep 6 01:19:35.891734 kernel: mlx5_core 3dc8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Sep 6 01:19:35.891817 kernel: hv_netvsc 000d3af9-3edc-000d-3af9-3edc000d3af9 eth0: VF registering: eth1
Sep 6 01:19:35.891925 kernel: mlx5_core 3dc8:00:02.0 eth1: joined to eth0
Sep 6 01:19:35.882492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 01:19:35.903914 kernel: mlx5_core 3dc8:00:02.0 enP15816s1: renamed from eth1
Sep 6 01:19:35.934914 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (537)
Sep 6 01:19:35.947847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 01:19:36.054382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 01:19:36.060381 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 01:19:36.073402 systemd[1]: Starting disk-uuid.service...
Sep 6 01:19:36.105793 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 01:19:37.104356 disk-uuid[598]: The operation has completed successfully.
Sep 6 01:19:37.109377 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 01:19:37.166319 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 01:19:37.169025 systemd[1]: Finished disk-uuid.service.
Sep 6 01:19:37.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.181294 systemd[1]: Starting verity-setup.service...
Sep 6 01:19:37.222973 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 6 01:19:37.383483 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 01:19:37.389183 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 01:19:37.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.396924 systemd[1]: Finished verity-setup.service.
Sep 6 01:19:37.458856 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 01:19:37.467173 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 01:19:37.463111 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 01:19:37.463882 systemd[1]: Starting ignition-setup.service...
Sep 6 01:19:37.480918 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 01:19:37.508007 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 01:19:37.508069 kernel: BTRFS info (device sda6): using free space tree
Sep 6 01:19:37.512710 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 01:19:37.564754 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 01:19:37.568770 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 01:19:37.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.578000 audit: BPF prog-id=9 op=LOAD
Sep 6 01:19:37.579842 systemd[1]: Starting systemd-networkd.service...
Sep 6 01:19:37.607643 systemd-networkd[875]: lo: Link UP
Sep 6 01:19:37.607651 systemd-networkd[875]: lo: Gained carrier
Sep 6 01:19:37.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.608094 systemd-networkd[875]: Enumeration completed
Sep 6 01:19:37.608452 systemd[1]: Started systemd-networkd.service.
Sep 6 01:19:37.616982 systemd[1]: Reached target network.target.
Sep 6 01:19:37.621407 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 01:19:37.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.631117 systemd[1]: Starting iscsiuio.service...
Sep 6 01:19:37.664449 iscsid[880]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 01:19:37.664449 iscsid[880]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 6 01:19:37.664449 iscsid[880]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 6 01:19:37.664449 iscsid[880]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 01:19:37.664449 iscsid[880]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 01:19:37.664449 iscsid[880]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 01:19:37.664449 iscsid[880]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 01:19:37.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.641345 systemd[1]: Started iscsiuio.service.
Sep 6 01:19:37.659624 systemd[1]: Starting iscsid.service...
Sep 6 01:19:37.677353 systemd[1]: Started iscsid.service.
Sep 6 01:19:37.682929 systemd[1]: Starting dracut-initqueue.service...
Sep 6 01:19:37.725085 systemd[1]: Finished ignition-setup.service.
Sep 6 01:19:37.730393 systemd[1]: Finished dracut-initqueue.service.
Sep 6 01:19:37.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:37.741546 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 01:19:37.752916 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 01:19:37.773112 systemd[1]: Reached target remote-fs.target.
Sep 6 01:19:37.782351 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 01:19:37.790678 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 01:19:37.810304 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 01:19:37.860933 kernel: mlx5_core 3dc8:00:02.0 enP15816s1: Link up
Sep 6 01:19:37.906191 kernel: hv_netvsc 000d3af9-3edc-000d-3af9-3edc000d3af9 eth0: Data path switched to VF: enP15816s1
Sep 6 01:19:37.906917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 6 01:19:37.906455 systemd-networkd[875]: enP15816s1: Link UP
Sep 6 01:19:37.906564 systemd-networkd[875]: eth0: Link UP
Sep 6 01:19:37.906696 systemd-networkd[875]: eth0: Gained carrier
Sep 6 01:19:37.915108 systemd-networkd[875]: enP15816s1: Gained carrier
Sep 6 01:19:37.933968 systemd-networkd[875]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 6 01:19:39.128022 systemd-networkd[875]: eth0: Gained IPv6LL
Sep 6 01:19:40.011305 ignition[891]: Ignition 2.14.0
Sep 6 01:19:40.011319 ignition[891]: Stage: fetch-offline
Sep 6 01:19:40.011386 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:40.011410 ignition[891]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:40.074547 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:40.074715 ignition[891]: parsed url from cmdline: ""
Sep 6 01:19:40.081385 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 01:19:40.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.074719 ignition[891]: no config URL provided
Sep 6 01:19:40.117279 kernel: kauditd_printk_skb: 18 callbacks suppressed
Sep 6 01:19:40.117308 kernel: audit: type=1130 audit(1757121580.087:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.096713 systemd[1]: Starting ignition-fetch.service...
Sep 6 01:19:40.074724 ignition[891]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 01:19:40.074733 ignition[891]: no config at "/usr/lib/ignition/user.ign"
Sep 6 01:19:40.074738 ignition[891]: failed to fetch config: resource requires networking
Sep 6 01:19:40.075197 ignition[891]: Ignition finished successfully
Sep 6 01:19:40.106653 ignition[901]: Ignition 2.14.0
Sep 6 01:19:40.106659 ignition[901]: Stage: fetch
Sep 6 01:19:40.106750 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:40.106770 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:40.110353 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:40.111339 ignition[901]: parsed url from cmdline: ""
Sep 6 01:19:40.111343 ignition[901]: no config URL provided
Sep 6 01:19:40.111350 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 01:19:40.111365 ignition[901]: no config at "/usr/lib/ignition/user.ign"
Sep 6 01:19:40.111404 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 6 01:19:40.253460 ignition[901]: GET result: OK
Sep 6 01:19:40.253567 ignition[901]: config has been read from IMDS userdata
Sep 6 01:19:40.256653 unknown[901]: fetched base config from "system"
Sep 6 01:19:40.253613 ignition[901]: parsing config with SHA512: bc0d8ecede807a3b23d5c66ffde6091ded1892343f81c5869f4c1d883cdd83b825518481ec73a2687c0eed9fe17ad2cafd334c4e2d14e81178055e1166245d71
Sep 6 01:19:40.256659 unknown[901]: fetched base config from "system"
Sep 6 01:19:40.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.257213 ignition[901]: fetch: fetch complete
Sep 6 01:19:40.301072 kernel: audit: type=1130 audit(1757121580.270:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.256664 unknown[901]: fetched user config from "azure"
Sep 6 01:19:40.257219 ignition[901]: fetch: fetch passed
Sep 6 01:19:40.263211 systemd[1]: Finished ignition-fetch.service.
Sep 6 01:19:40.337188 kernel: audit: type=1130 audit(1757121580.313:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.257259 ignition[901]: Ignition finished successfully
Sep 6 01:19:40.291408 systemd[1]: Starting ignition-kargs.service...
Sep 6 01:19:40.368979 kernel: audit: type=1130 audit(1757121580.346:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.300110 ignition[908]: Ignition 2.14.0
Sep 6 01:19:40.309163 systemd[1]: Finished ignition-kargs.service.
Sep 6 01:19:40.300116 ignition[908]: Stage: kargs
Sep 6 01:19:40.314609 systemd[1]: Starting ignition-disks.service...
Sep 6 01:19:40.300224 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:40.342474 systemd[1]: Finished ignition-disks.service.
Sep 6 01:19:40.300242 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:40.347115 systemd[1]: Reached target initrd-root-device.target.
Sep 6 01:19:40.302793 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:40.371400 systemd[1]: Reached target local-fs-pre.target.
Sep 6 01:19:40.304907 ignition[908]: kargs: kargs passed
Sep 6 01:19:40.379432 systemd[1]: Reached target local-fs.target.
Sep 6 01:19:40.304961 ignition[908]: Ignition finished successfully
Sep 6 01:19:40.387868 systemd[1]: Reached target sysinit.target.
Sep 6 01:19:40.324637 ignition[915]: Ignition 2.14.0
Sep 6 01:19:40.394978 systemd[1]: Reached target basic.target.
Sep 6 01:19:40.324644 ignition[915]: Stage: disks
Sep 6 01:19:40.403781 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 01:19:40.324756 ignition[915]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:40.324775 ignition[915]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:40.470279 systemd-fsck[923]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks
Sep 6 01:19:40.327756 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:40.340222 ignition[915]: disks: disks passed
Sep 6 01:19:40.340304 ignition[915]: Ignition finished successfully
Sep 6 01:19:40.494058 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 01:19:40.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.499463 systemd[1]: Mounting sysroot.mount...
Sep 6 01:19:40.524776 kernel: audit: type=1130 audit(1757121580.498:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:40.534901 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 01:19:40.535508 systemd[1]: Mounted sysroot.mount.
Sep 6 01:19:40.540058 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 01:19:40.581292 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 01:19:40.585977 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 6 01:19:40.593505 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 01:19:40.593539 systemd[1]: Reached target ignition-diskful.target.
Sep 6 01:19:40.599371 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 01:19:40.662994 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 01:19:40.668486 systemd[1]: Starting initrd-setup-root.service...
Sep 6 01:19:40.698420 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Sep 6 01:19:40.698470 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 01:19:40.698481 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 01:19:40.710061 kernel: BTRFS info (device sda6): using free space tree
Sep 6 01:19:40.710082 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 01:19:40.718637 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 01:19:40.729591 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory
Sep 6 01:19:40.753972 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 01:19:40.763169 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 01:19:41.168695 systemd[1]: Finished initrd-setup-root.service.
Sep 6 01:19:41.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.174368 systemd[1]: Starting ignition-mount.service...
Sep 6 01:19:41.209398 kernel: audit: type=1130 audit(1757121581.173:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.197292 systemd[1]: Starting sysroot-boot.service...
Sep 6 01:19:41.203919 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 6 01:19:41.204050 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 6 01:19:41.235406 systemd[1]: Finished sysroot-boot.service.
Sep 6 01:19:41.261922 kernel: audit: type=1130 audit(1757121581.241:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.262478 ignition[1001]: INFO : Ignition 2.14.0
Sep 6 01:19:41.262478 ignition[1001]: INFO : Stage: mount
Sep 6 01:19:41.272339 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:41.272339 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:41.272339 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:41.272339 ignition[1001]: INFO : mount: mount passed
Sep 6 01:19:41.272339 ignition[1001]: INFO : Ignition finished successfully
Sep 6 01:19:41.328537 kernel: audit: type=1130 audit(1757121581.283:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.279074 systemd[1]: Finished ignition-mount.service.
Sep 6 01:19:41.748980 coreos-metadata[932]: Sep 06 01:19:41.748 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 6 01:19:41.759533 coreos-metadata[932]: Sep 06 01:19:41.759 INFO Fetch successful
Sep 6 01:19:41.793259 coreos-metadata[932]: Sep 06 01:19:41.793 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 6 01:19:41.816345 coreos-metadata[932]: Sep 06 01:19:41.816 INFO Fetch successful
Sep 6 01:19:41.831962 coreos-metadata[932]: Sep 06 01:19:41.831 INFO wrote hostname ci-3510.3.8-n-4d72badcbe to /sysroot/etc/hostname
Sep 6 01:19:41.841734 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 6 01:19:41.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.847917 systemd[1]: Starting ignition-files.service...
Sep 6 01:19:41.875538 kernel: audit: type=1130 audit(1757121581.846:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:41.874651 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 01:19:41.892900 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1011)
Sep 6 01:19:41.905289 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 01:19:41.905316 kernel: BTRFS info (device sda6): using free space tree
Sep 6 01:19:41.905326 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 01:19:41.914319 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 01:19:41.928409 ignition[1030]: INFO : Ignition 2.14.0
Sep 6 01:19:41.928409 ignition[1030]: INFO : Stage: files
Sep 6 01:19:41.939611 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:41.939611 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:41.939611 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:41.939611 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 01:19:41.939611 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 01:19:41.939611 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 01:19:42.023346 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 01:19:42.032003 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 01:19:42.044135 unknown[1030]: wrote ssh authorized keys file for user: core
Sep 6 01:19:42.050405 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 01:19:42.058398 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 01:19:42.058398 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 6 01:19:42.109693 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 01:19:42.222626 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 01:19:42.235101 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 01:19:42.235101 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 01:19:42.403537 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 01:19:42.486200 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 01:19:42.495833 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584060838"
Sep 6 01:19:42.660127 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584060838": device or resource busy
Sep 6 01:19:42.660127 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem584060838", trying btrfs: device or resource busy
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584060838"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem584060838"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem584060838"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem584060838"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666811214"
Sep 6 01:19:42.660127 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666811214": device or resource busy
Sep 6 01:19:42.660127 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2666811214", trying btrfs: device or resource busy
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666811214"
Sep 6 01:19:42.660127 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666811214"
Sep 6 01:19:42.499083 systemd[1]: mnt-oem584060838.mount: Deactivated successfully.
Sep 6 01:19:42.821554 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2666811214"
Sep 6 01:19:42.821554 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2666811214"
Sep 6 01:19:42.821554 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 01:19:42.821554 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 01:19:42.821554 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 6 01:19:42.541280 systemd[1]: mnt-oem2666811214.mount: Deactivated successfully.
Sep 6 01:19:43.090076 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Sep 6 01:19:44.009175 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(14): [started] processing unit "waagent.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(14): [finished] processing unit "waagent.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(15): [started] processing unit "nvidia.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Sep 6 01:19:44.023377 ignition[1030]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Sep 6 01:19:44.156391 kernel: audit: type=1130 audit(1757121584.046:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.034346 systemd[1]: Finished ignition-files.service.
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 01:19:44.166524 ignition[1030]: INFO : files: files passed
Sep 6 01:19:44.166524 ignition[1030]: INFO : Ignition finished successfully
Sep 6 01:19:44.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.073848 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 01:19:44.079647 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 6 01:19:44.294650 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 01:19:44.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.080481 systemd[1]: Starting ignition-quench.service...
Sep 6 01:19:44.100575 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 01:19:44.100684 systemd[1]: Finished ignition-quench.service.
Sep 6 01:19:44.141171 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 6 01:19:44.147715 systemd[1]: Reached target ignition-complete.target.
Sep 6 01:19:44.162857 systemd[1]: Starting initrd-parse-etc.service...
Sep 6 01:19:44.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.194949 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 01:19:44.195078 systemd[1]: Finished initrd-parse-etc.service.
Sep 6 01:19:44.208042 systemd[1]: Reached target initrd-fs.target.
Sep 6 01:19:44.221505 systemd[1]: Reached target initrd.target.
Sep 6 01:19:44.235362 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 6 01:19:44.236268 systemd[1]: Starting dracut-pre-pivot.service...
Sep 6 01:19:44.283076 systemd[1]: Finished dracut-pre-pivot.service.
Sep 6 01:19:44.300441 systemd[1]: Starting initrd-cleanup.service...
Sep 6 01:19:44.319935 systemd[1]: Stopped target nss-lookup.target.
Sep 6 01:19:44.328036 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 6 01:19:44.337447 systemd[1]: Stopped target timers.target.
Sep 6 01:19:44.347927 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 01:19:44.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.348046 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 6 01:19:44.358635 systemd[1]: Stopped target initrd.target.
Sep 6 01:19:44.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.368431 systemd[1]: Stopped target basic.target.
Sep 6 01:19:44.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.379489 systemd[1]: Stopped target ignition-complete.target.
Sep 6 01:19:44.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.389830 systemd[1]: Stopped target ignition-diskful.target.
Sep 6 01:19:44.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.400262 systemd[1]: Stopped target initrd-root-device.target.
Sep 6 01:19:44.410405 systemd[1]: Stopped target remote-fs.target.
Sep 6 01:19:44.424093 systemd[1]: Stopped target remote-fs-pre.target.
Sep 6 01:19:44.563300 iscsid[880]: iscsid shutting down.
Sep 6 01:19:44.572606 ignition[1068]: INFO : Ignition 2.14.0
Sep 6 01:19:44.572606 ignition[1068]: INFO : Stage: umount
Sep 6 01:19:44.572606 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:44.572606 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:44.572606 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:44.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.434683 systemd[1]: Stopped target sysinit.target.
Sep 6 01:19:44.634972 ignition[1068]: INFO : umount: umount passed
Sep 6 01:19:44.634972 ignition[1068]: INFO : Ignition finished successfully
Sep 6 01:19:44.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.444869 systemd[1]: Stopped target local-fs.target.
Sep 6 01:19:44.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.455365 systemd[1]: Stopped target local-fs-pre.target.
Sep 6 01:19:44.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.465486 systemd[1]: Stopped target swap.target.
Sep 6 01:19:44.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.475109 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 01:19:44.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.475222 systemd[1]: Stopped dracut-pre-mount.service.
Sep 6 01:19:44.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.484674 systemd[1]: Stopped target cryptsetup.target.
Sep 6 01:19:44.495278 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 01:19:44.495373 systemd[1]: Stopped dracut-initqueue.service.
Sep 6 01:19:44.505374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 01:19:44.505472 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 6 01:19:44.515957 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 01:19:44.516043 systemd[1]: Stopped ignition-files.service.
Sep 6 01:19:44.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.525671 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 6 01:19:44.525762 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 6 01:19:44.537664 systemd[1]: Stopping ignition-mount.service...
Sep 6 01:19:44.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.553099 systemd[1]: Stopping iscsid.service...
Sep 6 01:19:44.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.568169 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 01:19:44.568322 systemd[1]: Stopped kmod-static-nodes.service.
Sep 6 01:19:44.580857 systemd[1]: Stopping sysroot-boot.service...
Sep 6 01:19:44.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.590835 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 01:19:44.591027 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 6 01:19:44.605143 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 01:19:44.605282 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 6 01:19:44.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.625954 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 01:19:44.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.626638 systemd[1]: iscsid.service: Deactivated successfully.
Sep 6 01:19:44.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.890000 audit: BPF prog-id=6 op=UNLOAD
Sep 6 01:19:44.626730 systemd[1]: Stopped iscsid.service.
Sep 6 01:19:44.639290 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 01:19:44.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.639381 systemd[1]: Stopped ignition-mount.service.
Sep 6 01:19:44.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.653112 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 01:19:44.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.653224 systemd[1]: Stopped ignition-disks.service.
Sep 6 01:19:44.662455 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 01:19:44.662547 systemd[1]: Stopped ignition-kargs.service.
Sep 6 01:19:44.672266 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 6 01:19:44.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.672357 systemd[1]: Stopped ignition-fetch.service.
Sep 6 01:19:44.681801 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 01:19:44.681911 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 6 01:19:45.011192 kernel: hv_netvsc 000d3af9-3edc-000d-3af9-3edc000d3af9 eth0: Data path switched from VF: enP15816s1
Sep 6 01:19:45.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:45.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.693067 systemd[1]: Stopped target paths.target.
Sep 6 01:19:45.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.706675 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 01:19:44.709911 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 6 01:19:44.719266 systemd[1]: Stopped target slices.target.
Sep 6 01:19:45.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.729537 systemd[1]: Stopped target sockets.target.
Sep 6 01:19:45.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:45.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.742994 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 01:19:44.743089 systemd[1]: Closed iscsid.socket.
Sep 6 01:19:44.753562 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 01:19:44.753663 systemd[1]: Stopped ignition-setup.service.
Sep 6 01:19:44.766720 systemd[1]: Stopping iscsiuio.service...
Sep 6 01:19:44.784062 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 6 01:19:44.784165 systemd[1]: Stopped iscsiuio.service.
Sep 6 01:19:44.793589 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 6 01:19:44.793672 systemd[1]: Stopped sysroot-boot.service.
Sep 6 01:19:44.802772 systemd[1]: Stopped target network.target.
Sep 6 01:19:44.811199 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 01:19:44.811236 systemd[1]: Closed iscsiuio.socket.
Sep 6 01:19:44.821470 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 6 01:19:44.821516 systemd[1]: Stopped initrd-setup-root.service.
Sep 6 01:19:44.831515 systemd[1]: Stopping systemd-networkd.service...
Sep 6 01:19:44.840467 systemd[1]: Stopping systemd-resolved.service...
Sep 6 01:19:45.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:44.846953 systemd-networkd[875]: eth0: DHCPv6 lease lost
Sep 6 01:19:45.178623 kernel: kauditd_printk_skb: 40 callbacks suppressed
Sep 6 01:19:45.178646 kernel: audit: type=1131 audit(1757121585.131:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:45.178657 kernel: audit: type=1334 audit(1757121585.137:80): prog-id=9 op=UNLOAD
Sep 6 01:19:45.137000 audit: BPF prog-id=9 op=UNLOAD
Sep 6 01:19:44.858283 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 01:19:44.858392 systemd[1]: Stopped systemd-resolved.service.
Sep 6 01:19:44.869996 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 01:19:44.870079 systemd[1]: Stopped systemd-networkd.service.
Sep 6 01:19:44.880031 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 01:19:44.880111 systemd[1]: Finished initrd-cleanup.service.
Sep 6 01:19:44.891354 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 01:19:44.891396 systemd[1]: Closed systemd-networkd.socket.
Sep 6 01:19:45.235802 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Sep 6 01:19:44.900921 systemd[1]: Stopping network-cleanup.service...
Sep 6 01:19:44.908159 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 01:19:44.908218 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 6 01:19:44.913478 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 01:19:44.913534 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 01:19:44.926945 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 01:19:44.926992 systemd[1]: Stopped systemd-modules-load.service.
Sep 6 01:19:44.934487 systemd[1]: Stopping systemd-udevd.service...
Sep 6 01:19:44.945816 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 6 01:19:44.954592 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 01:19:44.954742 systemd[1]: Stopped systemd-udevd.service.
Sep 6 01:19:44.964744 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 6 01:19:44.964784 systemd[1]: Closed systemd-udevd-control.socket.
Sep 6 01:19:44.974179 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 6 01:19:44.974216 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 6 01:19:44.982535 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 6 01:19:44.982579 systemd[1]: Stopped dracut-pre-udev.service.
Sep 6 01:19:45.002363 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 6 01:19:45.002417 systemd[1]: Stopped dracut-cmdline.service.
Sep 6 01:19:45.006974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 01:19:45.007014 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 6 01:19:45.019449 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 6 01:19:45.031521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 01:19:45.031595 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 6 01:19:45.042452 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 6 01:19:45.042543 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 6 01:19:45.121076 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 01:19:45.121186 systemd[1]: Stopped network-cleanup.service.
Sep 6 01:19:45.132034 systemd[1]: Reached target initrd-switch-root.target.
Sep 6 01:19:45.184615 systemd[1]: Starting initrd-switch-root.service...
Sep 6 01:19:45.198090 systemd[1]: Switching root.
Sep 6 01:19:45.236603 systemd-journald[276]: Journal stopped
Sep 6 01:19:54.912148 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 6 01:19:54.912170 kernel: SELinux: Class anon_inode not defined in policy.
Sep 6 01:19:54.912181 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 6 01:19:54.912190 kernel: SELinux: policy capability network_peer_controls=1
Sep 6 01:19:54.912198 kernel: SELinux: policy capability open_perms=1
Sep 6 01:19:54.912206 kernel: SELinux: policy capability extended_socket_class=1
Sep 6 01:19:54.912216 kernel: SELinux: policy capability always_check_network=0
Sep 6 01:19:54.912224 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 6 01:19:54.912232 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 6 01:19:54.912240 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 6 01:19:54.912248 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 6 01:19:54.912257 kernel: audit: type=1403 audit(1757121587.011:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 6 01:19:54.912268 systemd[1]: Successfully loaded SELinux policy in 255.366ms.
Sep 6 01:19:54.912278 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.912ms.
Sep 6 01:19:54.912289 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 01:19:54.912299 systemd[1]: Detected virtualization microsoft.
Sep 6 01:19:54.912308 systemd[1]: Detected architecture arm64.
Sep 6 01:19:54.912317 systemd[1]: Detected first boot.
Sep 6 01:19:54.912327 systemd[1]: Hostname set to .
Sep 6 01:19:54.912336 systemd[1]: Initializing machine ID from random generator.
Sep 6 01:19:54.912346 kernel: audit: type=1400 audit(1757121587.765:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 01:19:54.912355 kernel: audit: type=1400 audit(1757121587.768:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 6 01:19:54.912365 kernel: audit: type=1334 audit(1757121587.788:84): prog-id=10 op=LOAD
Sep 6 01:19:54.912374 kernel: audit: type=1334 audit(1757121587.788:85): prog-id=10 op=UNLOAD
Sep 6 01:19:54.912382 kernel: audit: type=1334 audit(1757121587.810:86): prog-id=11 op=LOAD
Sep 6 01:19:54.912391 kernel: audit: type=1334 audit(1757121587.810:87): prog-id=11 op=UNLOAD
Sep 6 01:19:54.912400 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 6 01:19:54.912410 kernel: audit: type=1400 audit(1757121588.842:88): avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 6 01:19:54.912419 systemd[1]: Populated /etc with preset unit settings.
Sep 6 01:19:54.912430 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 01:19:54.912440 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 01:19:54.912450 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 01:19:54.912459 kernel: kauditd_printk_skb: 8 callbacks suppressed
Sep 6 01:19:54.912468 kernel: audit: type=1334 audit(1757121594.100:90): prog-id=12 op=LOAD
Sep 6 01:19:54.912476 kernel: audit: type=1334 audit(1757121594.100:91): prog-id=3 op=UNLOAD
Sep 6 01:19:54.912485 kernel: audit: type=1334 audit(1757121594.107:92): prog-id=13 op=LOAD
Sep 6 01:19:54.912495 kernel: audit: type=1334 audit(1757121594.115:93): prog-id=14 op=LOAD
Sep 6 01:19:54.912503 kernel: audit: type=1334 audit(1757121594.115:94): prog-id=4 op=UNLOAD
Sep 6 01:19:54.912512 kernel: audit: type=1334 audit(1757121594.115:95): prog-id=5 op=UNLOAD
Sep 6 01:19:54.912521 kernel: audit: type=1334 audit(1757121594.122:96): prog-id=15 op=LOAD
Sep 6 01:19:54.912532 kernel: audit: type=1334 audit(1757121594.122:97): prog-id=12 op=UNLOAD
Sep 6 01:19:54.912541 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 6 01:19:54.912550 kernel: audit: type=1334 audit(1757121594.129:98): prog-id=16 op=LOAD
Sep 6 01:19:54.912560 systemd[1]: Stopped initrd-switch-root.service.
Sep 6 01:19:54.912571 kernel: audit: type=1334 audit(1757121594.136:99): prog-id=17 op=LOAD
Sep 6 01:19:54.912580 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 6 01:19:54.912590 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 6 01:19:54.912599 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 6 01:19:54.912609 systemd[1]: Created slice system-getty.slice.
Sep 6 01:19:54.912618 systemd[1]: Created slice system-modprobe.slice.
Sep 6 01:19:54.912628 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 6 01:19:54.912637 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 6 01:19:54.912647 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 6 01:19:54.912657 systemd[1]: Created slice user.slice.
Sep 6 01:19:54.912666 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 01:19:54.912676 systemd[1]: Started systemd-ask-password-wall.path.
Sep 6 01:19:54.912685 systemd[1]: Set up automount boot.automount.
Sep 6 01:19:54.912694 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 6 01:19:54.912704 systemd[1]: Stopped target initrd-switch-root.target.
Sep 6 01:19:54.912713 systemd[1]: Stopped target initrd-fs.target.
Sep 6 01:19:54.912723 systemd[1]: Stopped target initrd-root-fs.target.
Sep 6 01:19:54.912732 systemd[1]: Reached target integritysetup.target.
Sep 6 01:19:54.912743 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 01:19:54.912752 systemd[1]: Reached target remote-fs.target.
Sep 6 01:19:54.912762 systemd[1]: Reached target slices.target.
Sep 6 01:19:54.912771 systemd[1]: Reached target swap.target.
Sep 6 01:19:54.912780 systemd[1]: Reached target torcx.target.
Sep 6 01:19:54.912790 systemd[1]: Reached target veritysetup.target.
Sep 6 01:19:54.912799 systemd[1]: Listening on systemd-coredump.socket.
Sep 6 01:19:54.912810 systemd[1]: Listening on systemd-initctl.socket.
Sep 6 01:19:54.912820 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 01:19:54.912830 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 01:19:54.912839 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 01:19:54.912848 systemd[1]: Listening on systemd-userdbd.socket.
Sep 6 01:19:54.912859 systemd[1]: Mounting dev-hugepages.mount...
Sep 6 01:19:54.912869 systemd[1]: Mounting dev-mqueue.mount...
Sep 6 01:19:54.912878 systemd[1]: Mounting media.mount...
Sep 6 01:19:54.912897 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 6 01:19:54.912907 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 6 01:19:54.912917 systemd[1]: Mounting tmp.mount...
Sep 6 01:19:54.912926 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 6 01:19:54.912936 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 6 01:19:54.912946 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 01:19:54.912956 systemd[1]: Starting modprobe@configfs.service...
Sep 6 01:19:54.912966 systemd[1]: Starting modprobe@dm_mod.service...
Sep 6 01:19:54.912976 systemd[1]: Starting modprobe@drm.service...
Sep 6 01:19:54.912985 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 6 01:19:54.912995 systemd[1]: Starting modprobe@fuse.service...
Sep 6 01:19:54.913004 systemd[1]: Starting modprobe@loop.service...
Sep 6 01:19:54.913014 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 6 01:19:54.913024 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 6 01:19:54.913034 systemd[1]: Stopped systemd-fsck-root.service.
Sep 6 01:19:54.913045 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 6 01:19:54.913055 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 6 01:19:54.913064 systemd[1]: Stopped systemd-journald.service.
Sep 6 01:19:54.913074 systemd[1]: systemd-journald.service: Consumed 3.129s CPU time.
Sep 6 01:19:54.913083 systemd[1]: Starting systemd-journald.service...
Sep 6 01:19:54.913093 kernel: fuse: init (API version 7.34)
Sep 6 01:19:54.913103 kernel: loop: module loaded
Sep 6 01:19:54.913112 systemd[1]: Starting systemd-modules-load.service...
Sep 6 01:19:54.913122 systemd[1]: Starting systemd-network-generator.service...
Sep 6 01:19:54.913132 systemd[1]: Starting systemd-remount-fs.service...
Sep 6 01:19:54.913142 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 01:19:54.913151 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 6 01:19:54.913161 systemd[1]: Stopped verity-setup.service.
Sep 6 01:19:54.913170 systemd[1]: Mounted dev-hugepages.mount.
Sep 6 01:19:54.913183 systemd-journald[1203]: Journal started Sep 6 01:19:54.913220 systemd-journald[1203]: Runtime Journal (/run/log/journal/dafdd03920ef4efb8442a43f9f41d93d) is 8.0M, max 78.5M, 70.5M free. Sep 6 01:19:47.011000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 01:19:47.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:19:47.768000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:19:47.788000 audit: BPF prog-id=10 op=LOAD Sep 6 01:19:47.788000 audit: BPF prog-id=10 op=UNLOAD Sep 6 01:19:47.810000 audit: BPF prog-id=11 op=LOAD Sep 6 01:19:47.810000 audit: BPF prog-id=11 op=UNLOAD Sep 6 01:19:48.842000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 01:19:48.842000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222fc a1=40000283d8 a2=4000026840 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:19:48.842000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:19:48.851000 audit[1101]: AVC avc: denied { associate } for 
pid=1101 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 01:19:48.851000 audit[1101]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000223d9 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:19:48.851000 audit: CWD cwd="/" Sep 6 01:19:48.851000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:48.851000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:48.851000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:19:54.100000 audit: BPF prog-id=12 op=LOAD Sep 6 01:19:54.100000 audit: BPF prog-id=3 op=UNLOAD Sep 6 01:19:54.107000 audit: BPF prog-id=13 op=LOAD Sep 6 01:19:54.115000 audit: BPF prog-id=14 op=LOAD Sep 6 01:19:54.115000 audit: BPF prog-id=4 op=UNLOAD Sep 6 01:19:54.115000 audit: BPF prog-id=5 op=UNLOAD Sep 6 01:19:54.122000 audit: BPF prog-id=15 op=LOAD Sep 6 01:19:54.122000 audit: BPF prog-id=12 op=UNLOAD Sep 6 01:19:54.129000 audit: BPF prog-id=16 op=LOAD Sep 6 01:19:54.136000 audit: BPF prog-id=17 op=LOAD Sep 6 01:19:54.136000 audit: BPF prog-id=13 op=UNLOAD Sep 6 01:19:54.136000 audit: BPF prog-id=14 op=UNLOAD Sep 6 01:19:54.144000 audit: BPF prog-id=18 op=LOAD Sep 
6 01:19:54.144000 audit: BPF prog-id=15 op=UNLOAD Sep 6 01:19:54.151000 audit: BPF prog-id=19 op=LOAD Sep 6 01:19:54.159000 audit: BPF prog-id=20 op=LOAD Sep 6 01:19:54.159000 audit: BPF prog-id=16 op=UNLOAD Sep 6 01:19:54.159000 audit: BPF prog-id=17 op=UNLOAD Sep 6 01:19:54.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.187000 audit: BPF prog-id=18 op=UNLOAD Sep 6 01:19:54.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:19:54.806000 audit: BPF prog-id=21 op=LOAD Sep 6 01:19:54.807000 audit: BPF prog-id=22 op=LOAD Sep 6 01:19:54.807000 audit: BPF prog-id=23 op=LOAD Sep 6 01:19:54.807000 audit: BPF prog-id=19 op=UNLOAD Sep 6 01:19:54.807000 audit: BPF prog-id=20 op=UNLOAD Sep 6 01:19:54.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.909000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:19:54.909000 audit[1203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdd92c830 a2=4000 a3=1 items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:19:54.909000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:19:54.100494 systemd[1]: Queued start job for default target multi-user.target. Sep 6 01:19:48.803285 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:19:54.100506 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 01:19:48.803642 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 01:19:54.160338 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 01:19:48.803660 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 01:19:54.160718 systemd[1]: systemd-journald.service: Consumed 3.129s CPU time. Sep 6 01:19:48.803699 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 01:19:48.803709 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 01:19:48.803739 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 01:19:48.803751 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 01:19:48.803981 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 01:19:48.804016 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 01:19:48.804027 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 01:19:48.829618 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 01:19:48.829658 /usr/lib/systemd/system-generators/torcx-generator[1101]: 
time="2025-09-06T01:19:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 01:19:48.829680 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 01:19:48.829694 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 01:19:48.829714 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 01:19:48.829727 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 01:19:53.174174 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:19:53.174435 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:19:53.174537 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:19:53.174693 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:19:53.174741 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 01:19:53.174794 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-06T01:19:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 01:19:54.931146 systemd[1]: Started systemd-journald.service. Sep 6 01:19:54.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.931972 systemd[1]: Mounted dev-mqueue.mount. Sep 6 01:19:54.936940 systemd[1]: Mounted media.mount. Sep 6 01:19:54.941241 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 01:19:54.946487 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 01:19:54.952299 systemd[1]: Mounted tmp.mount. Sep 6 01:19:54.956668 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 01:19:54.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:19:54.962488 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:19:54.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.968410 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 01:19:54.968554 systemd[1]: Finished modprobe@configfs.service. Sep 6 01:19:54.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.974066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:19:54.974182 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:19:54.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.979427 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:19:54.979581 systemd[1]: Finished modprobe@drm.service. Sep 6 01:19:54.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:19:54.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.985333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:19:54.985464 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:19:54.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.991085 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 01:19:54.991197 systemd[1]: Finished modprobe@fuse.service. Sep 6 01:19:54.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:54.996608 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:19:54.996780 systemd[1]: Finished modprobe@loop.service. Sep 6 01:19:55.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:19:55.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.002460 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:19:55.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.008395 systemd[1]: Finished systemd-network-generator.service. Sep 6 01:19:55.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.014795 systemd[1]: Finished systemd-remount-fs.service. Sep 6 01:19:55.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.020795 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:19:55.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.027322 systemd[1]: Reached target network-pre.target. Sep 6 01:19:55.034204 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 01:19:55.040362 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 01:19:55.044921 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 01:19:55.046405 systemd[1]: Starting systemd-hwdb-update.service... 
Sep 6 01:19:55.052504 systemd[1]: Starting systemd-journal-flush.service... Sep 6 01:19:55.057811 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:19:55.058860 systemd[1]: Starting systemd-random-seed.service... Sep 6 01:19:55.064161 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:19:55.065209 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:19:55.070950 systemd[1]: Starting systemd-sysusers.service... Sep 6 01:19:55.076815 systemd[1]: Starting systemd-udev-settle.service... Sep 6 01:19:55.083742 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 01:19:55.089462 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 01:19:55.098929 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 01:19:55.110011 systemd[1]: Finished systemd-random-seed.service. Sep 6 01:19:55.116574 systemd-journald[1203]: Time spent on flushing to /var/log/journal/dafdd03920ef4efb8442a43f9f41d93d is 16.992ms for 1099 entries. Sep 6 01:19:55.116574 systemd-journald[1203]: System Journal (/var/log/journal/dafdd03920ef4efb8442a43f9f41d93d) is 8.0M, max 2.6G, 2.6G free. Sep 6 01:19:55.175336 systemd-journald[1203]: Received client request to flush runtime journal. Sep 6 01:19:55.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.124772 systemd[1]: Reached target first-boot-complete.target. 
Sep 6 01:19:55.163373 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:19:55.176263 systemd[1]: Finished systemd-journal-flush.service. Sep 6 01:19:55.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.554222 systemd[1]: Finished systemd-sysusers.service. Sep 6 01:19:55.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.969636 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 01:19:55.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:55.975000 audit: BPF prog-id=24 op=LOAD Sep 6 01:19:55.975000 audit: BPF prog-id=25 op=LOAD Sep 6 01:19:55.975000 audit: BPF prog-id=7 op=UNLOAD Sep 6 01:19:55.975000 audit: BPF prog-id=8 op=UNLOAD Sep 6 01:19:55.976858 systemd[1]: Starting systemd-udevd.service... Sep 6 01:19:55.995993 systemd-udevd[1224]: Using default interface naming scheme 'v252'. Sep 6 01:19:56.125531 systemd[1]: Started systemd-udevd.service. Sep 6 01:19:56.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:56.146966 systemd[1]: Starting systemd-networkd.service... Sep 6 01:19:56.145000 audit: BPF prog-id=26 op=LOAD Sep 6 01:19:56.169631 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. 
Sep 6 01:19:56.220000 audit: BPF prog-id=27 op=LOAD Sep 6 01:19:56.220000 audit: BPF prog-id=28 op=LOAD Sep 6 01:19:56.220000 audit: BPF prog-id=29 op=LOAD Sep 6 01:19:56.221772 systemd[1]: Starting systemd-userdbd.service... Sep 6 01:19:56.241000 audit[1229]: AVC avc: denied { confidentiality } for pid=1229 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:19:56.264267 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 01:19:56.264344 kernel: hv_vmbus: registering driver hv_balloon Sep 6 01:19:56.264360 kernel: hv_vmbus: registering driver hyperv_fb Sep 6 01:19:56.264374 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 6 01:19:56.269913 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 6 01:19:56.279837 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 6 01:19:56.292725 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 6 01:19:56.292820 kernel: Console: switching to colour dummy device 80x25 Sep 6 01:19:56.295913 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:19:56.241000 audit[1229]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae76526e0 a1=aa2c a2=ffffa1b324b0 a3=aaaae75b3010 items=12 ppid=1224 pid=1229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:19:56.241000 audit: CWD cwd="/" Sep 6 01:19:56.241000 audit: PATH item=0 name=(null) inode=5648 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=1 name=(null) inode=11195 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=2 name=(null) inode=11195 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=3 name=(null) inode=11196 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=4 name=(null) inode=11195 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=5 name=(null) inode=11197 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=6 name=(null) inode=11195 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=7 name=(null) inode=11198 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=8 name=(null) inode=11195 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=9 name=(null) inode=11199 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PATH item=10 name=(null) inode=11195 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 
audit: PATH item=11 name=(null) inode=11200 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:19:56.241000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 01:19:56.309811 systemd[1]: Started systemd-userdbd.service. Sep 6 01:19:56.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:56.343196 kernel: hv_utils: Registering HyperV Utility Driver Sep 6 01:19:56.343264 kernel: hv_vmbus: registering driver hv_utils Sep 6 01:19:56.347187 kernel: hv_utils: Heartbeat IC version 3.0 Sep 6 01:19:56.347320 kernel: hv_utils: Shutdown IC version 3.2 Sep 6 01:19:56.355973 kernel: hv_utils: TimeSync IC version 4.0 Sep 6 01:19:56.624051 systemd-networkd[1245]: lo: Link UP Sep 6 01:19:56.624520 systemd-networkd[1245]: lo: Gained carrier Sep 6 01:19:56.625050 systemd-networkd[1245]: Enumeration completed Sep 6 01:19:56.625292 systemd[1]: Started systemd-networkd.service. Sep 6 01:19:56.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:56.632788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:19:56.639537 systemd[1]: Finished systemd-udev-settle.service. Sep 6 01:19:56.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:56.646809 systemd[1]: Starting lvm2-activation-early.service... Sep 6 01:19:56.650608 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 6 01:19:56.653808 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:19:56.707141 kernel: mlx5_core 3dc8:00:02.0 enP15816s1: Link up Sep 6 01:19:56.735144 kernel: hv_netvsc 000d3af9-3edc-000d-3af9-3edc000d3af9 eth0: Data path switched to VF: enP15816s1 Sep 6 01:19:56.736020 systemd-networkd[1245]: enP15816s1: Link UP Sep 6 01:19:56.736284 systemd-networkd[1245]: eth0: Link UP Sep 6 01:19:56.736348 systemd-networkd[1245]: eth0: Gained carrier Sep 6 01:19:56.747366 systemd-networkd[1245]: enP15816s1: Gained carrier Sep 6 01:19:56.757234 systemd-networkd[1245]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:19:56.910950 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:19:56.951089 systemd[1]: Finished lvm2-activation-early.service. Sep 6 01:19:56.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:56.957077 systemd[1]: Reached target cryptsetup.target. Sep 6 01:19:56.963506 systemd[1]: Starting lvm2-activation.service... Sep 6 01:19:56.967602 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:19:56.990133 systemd[1]: Finished lvm2-activation.service. Sep 6 01:19:56.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:56.995198 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:19:57.000696 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 01:19:57.000723 systemd[1]: Reached target local-fs.target. Sep 6 01:19:57.005428 systemd[1]: Reached target machines.target. 
Sep 6 01:19:57.011309 systemd[1]: Starting ldconfig.service... Sep 6 01:19:57.015880 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:19:57.015944 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:19:57.017217 systemd[1]: Starting systemd-boot-update.service... Sep 6 01:19:57.023383 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 01:19:57.032322 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 01:19:57.039579 systemd[1]: Starting systemd-sysext.service... Sep 6 01:19:57.063692 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1306 (bootctl) Sep 6 01:19:57.065029 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 01:19:57.455905 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 01:19:57.757168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 01:19:57.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.768585 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 01:19:57.768780 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 01:19:57.816121 kernel: loop0: detected capacity change from 0 to 211168 Sep 6 01:19:57.857130 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 01:19:57.873147 kernel: loop1: detected capacity change from 0 to 211168 Sep 6 01:19:57.877696 (sd-sysext)[1318]: Using extensions 'kubernetes'. Sep 6 01:19:57.878588 (sd-sysext)[1318]: Merged extensions into '/usr'. Sep 6 01:19:57.888746 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 6 01:19:57.889357 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 01:19:57.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.901757 systemd[1]: Mounting usr-share-oem.mount... Sep 6 01:19:57.906582 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:19:57.907909 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:19:57.914002 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:19:57.920334 systemd[1]: Starting modprobe@loop.service... Sep 6 01:19:57.924832 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:19:57.924964 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:19:57.927461 systemd[1]: Mounted usr-share-oem.mount. Sep 6 01:19:57.932679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:19:57.932853 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:19:57.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.938736 systemd[1]: Finished systemd-sysext.service. 
Sep 6 01:19:57.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.944432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:19:57.944564 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:19:57.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.950559 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:19:57.950675 systemd[1]: Finished modprobe@loop.service. Sep 6 01:19:57.956366 systemd-fsck[1314]: fsck.fat 4.2 (2021-01-31) Sep 6 01:19:57.956366 systemd-fsck[1314]: /dev/sda1: 236 files, 117310/258078 clusters Sep 6 01:19:57.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.959957 systemd[1]: Starting ensure-sysext.service... Sep 6 01:19:57.964808 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:19:57.964884 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Sep 6 01:19:57.966013 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 01:19:57.974900 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 01:19:57.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:57.987008 systemd[1]: Mounting boot.mount... Sep 6 01:19:57.991463 systemd[1]: Reloading. Sep 6 01:19:58.030411 /usr/lib/systemd/system-generators/torcx-generator[1349]: time="2025-09-06T01:19:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:19:58.035805 /usr/lib/systemd/system-generators/torcx-generator[1349]: time="2025-09-06T01:19:58Z" level=info msg="torcx already run" Sep 6 01:19:58.125524 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:19:58.125543 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:19:58.129171 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 01:19:58.141615 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:19:58.199775 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 6 01:19:58.205000 audit: BPF prog-id=30 op=LOAD Sep 6 01:19:58.205000 audit: BPF prog-id=31 op=LOAD Sep 6 01:19:58.205000 audit: BPF prog-id=24 op=UNLOAD Sep 6 01:19:58.205000 audit: BPF prog-id=25 op=UNLOAD Sep 6 01:19:58.206000 audit: BPF prog-id=32 op=LOAD Sep 6 01:19:58.206000 audit: BPF prog-id=27 op=UNLOAD Sep 6 01:19:58.206000 audit: BPF prog-id=33 op=LOAD Sep 6 01:19:58.206000 audit: BPF prog-id=34 op=LOAD Sep 6 01:19:58.206000 audit: BPF prog-id=28 op=UNLOAD Sep 6 01:19:58.206000 audit: BPF prog-id=29 op=UNLOAD Sep 6 01:19:58.208000 audit: BPF prog-id=35 op=LOAD Sep 6 01:19:58.208000 audit: BPF prog-id=21 op=UNLOAD Sep 6 01:19:58.208000 audit: BPF prog-id=36 op=LOAD Sep 6 01:19:58.208000 audit: BPF prog-id=37 op=LOAD Sep 6 01:19:58.208000 audit: BPF prog-id=22 op=UNLOAD Sep 6 01:19:58.208000 audit: BPF prog-id=23 op=UNLOAD Sep 6 01:19:58.208000 audit: BPF prog-id=38 op=LOAD Sep 6 01:19:58.208000 audit: BPF prog-id=26 op=UNLOAD Sep 6 01:19:58.212953 systemd[1]: Mounted boot.mount. Sep 6 01:19:58.229269 systemd[1]: Finished systemd-boot-update.service. Sep 6 01:19:58.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.241969 systemd[1]: Finished ensure-sysext.service. Sep 6 01:19:58.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.249512 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:19:58.251060 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:19:58.257080 systemd[1]: Starting modprobe@drm.service... Sep 6 01:19:58.263449 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 6 01:19:58.269474 systemd[1]: Starting modprobe@loop.service... Sep 6 01:19:58.274227 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:19:58.274380 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:19:58.275000 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:19:58.275326 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:19:58.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.282820 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 01:19:58.283515 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:19:58.283749 systemd[1]: Finished modprobe@drm.service. Sep 6 01:19:58.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.289248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:19:58.289456 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 6 01:19:58.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.296935 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:19:58.297160 systemd[1]: Finished modprobe@loop.service. Sep 6 01:19:58.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.302790 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:19:58.302900 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:19:58.627244 systemd-networkd[1245]: eth0: Gained IPv6LL Sep 6 01:19:58.630284 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:19:58.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.693763 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Sep 6 01:19:58.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.700842 systemd[1]: Starting audit-rules.service... Sep 6 01:19:58.706545 systemd[1]: Starting clean-ca-certificates.service... Sep 6 01:19:58.712647 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 01:19:58.717000 audit: BPF prog-id=39 op=LOAD Sep 6 01:19:58.719941 systemd[1]: Starting systemd-resolved.service... Sep 6 01:19:58.726000 audit: BPF prog-id=40 op=LOAD Sep 6 01:19:58.728934 systemd[1]: Starting systemd-timesyncd.service... Sep 6 01:19:58.735465 systemd[1]: Starting systemd-update-utmp.service... Sep 6 01:19:58.741689 systemd[1]: Finished clean-ca-certificates.service. Sep 6 01:19:58.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.750314 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:19:58.767000 audit[1419]: SYSTEM_BOOT pid=1419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.771253 systemd[1]: Finished systemd-update-utmp.service. Sep 6 01:19:58.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.831967 systemd[1]: Started systemd-timesyncd.service. 
Sep 6 01:19:58.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.838169 systemd[1]: Reached target time-set.target. Sep 6 01:19:58.854347 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 01:19:58.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.877921 systemd-resolved[1417]: Positive Trust Anchors: Sep 6 01:19:58.878280 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:19:58.878311 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:19:58.934065 systemd-resolved[1417]: Using system hostname 'ci-3510.3.8-n-4d72badcbe'. Sep 6 01:19:58.935851 systemd[1]: Started systemd-resolved.service. Sep 6 01:19:58.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.941812 systemd[1]: Reached target network.target. Sep 6 01:19:58.947203 systemd[1]: Reached target network-online.target. Sep 6 01:19:58.953185 systemd[1]: Reached target nss-lookup.target. 
Sep 6 01:19:59.054000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 01:19:59.054000 audit[1434]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8e7a920 a2=420 a3=0 items=0 ppid=1413 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:19:59.054000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 01:19:59.069922 augenrules[1434]: No rules Sep 6 01:19:59.070970 systemd[1]: Finished audit-rules.service. Sep 6 01:19:59.136097 systemd-timesyncd[1418]: Contacted time server 139.94.144.123:123 (0.flatcar.pool.ntp.org). Sep 6 01:19:59.136516 systemd-timesyncd[1418]: Initial clock synchronization to Sat 2025-09-06 01:19:59.138648 UTC. Sep 6 01:20:03.791617 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 01:20:03.799223 systemd[1]: Finished ldconfig.service. Sep 6 01:20:03.805205 systemd[1]: Starting systemd-update-done.service... Sep 6 01:20:03.838897 systemd[1]: Finished systemd-update-done.service. Sep 6 01:20:03.845249 systemd[1]: Reached target sysinit.target. Sep 6 01:20:03.850201 systemd[1]: Started motdgen.path. Sep 6 01:20:03.854463 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 01:20:03.861268 systemd[1]: Started logrotate.timer. Sep 6 01:20:03.865456 systemd[1]: Started mdadm.timer. Sep 6 01:20:03.869353 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 01:20:03.874625 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:20:03.874659 systemd[1]: Reached target paths.target. Sep 6 01:20:03.878903 systemd[1]: Reached target timers.target. 
Sep 6 01:20:03.883942 systemd[1]: Listening on dbus.socket. Sep 6 01:20:03.889254 systemd[1]: Starting docker.socket... Sep 6 01:20:03.895613 systemd[1]: Listening on sshd.socket. Sep 6 01:20:03.899945 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:03.900463 systemd[1]: Listening on docker.socket. Sep 6 01:20:03.905046 systemd[1]: Reached target sockets.target. Sep 6 01:20:03.909493 systemd[1]: Reached target basic.target. Sep 6 01:20:03.913628 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:20:03.913656 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:20:03.914734 systemd[1]: Starting containerd.service... Sep 6 01:20:03.919457 systemd[1]: Starting dbus.service... Sep 6 01:20:03.924288 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 01:20:03.929947 systemd[1]: Starting extend-filesystems.service... Sep 6 01:20:03.934561 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 01:20:03.935778 systemd[1]: Starting kubelet.service... Sep 6 01:20:03.940573 systemd[1]: Starting motdgen.service... Sep 6 01:20:03.945176 systemd[1]: Started nvidia.service. Sep 6 01:20:03.950801 systemd[1]: Starting prepare-helm.service... Sep 6 01:20:03.956169 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 01:20:03.961415 systemd[1]: Starting sshd-keygen.service... Sep 6 01:20:03.967889 systemd[1]: Starting systemd-logind.service... Sep 6 01:20:03.973797 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 6 01:20:03.973872 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 01:20:03.974332 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 01:20:03.974973 systemd[1]: Starting update-engine.service... Sep 6 01:20:03.979901 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 01:20:03.989702 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 01:20:03.989871 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 01:20:04.014906 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 01:20:04.015089 systemd[1]: Finished motdgen.service. Sep 6 01:20:04.037601 extend-filesystems[1445]: Found loop1 Sep 6 01:20:04.037601 extend-filesystems[1445]: Found sda Sep 6 01:20:04.037601 extend-filesystems[1445]: Found sda1 Sep 6 01:20:04.037601 extend-filesystems[1445]: Found sda2 Sep 6 01:20:04.037601 extend-filesystems[1445]: Found sda3 Sep 6 01:20:04.065456 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 6 01:20:04.100425 env[1468]: time="2025-09-06T01:20:04.077188996Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 01:20:04.100552 extend-filesystems[1445]: Found usr Sep 6 01:20:04.100552 extend-filesystems[1445]: Found sda4 Sep 6 01:20:04.100552 extend-filesystems[1445]: Found sda6 Sep 6 01:20:04.100552 extend-filesystems[1445]: Found sda7 Sep 6 01:20:04.100552 extend-filesystems[1445]: Found sda9 Sep 6 01:20:04.100552 extend-filesystems[1445]: Checking size of /dev/sda9 Sep 6 01:20:04.177851 jq[1461]: true Sep 6 01:20:04.177971 jq[1444]: false Sep 6 01:20:04.066296 systemd-logind[1457]: New seat seat0. 
Sep 6 01:20:04.178122 extend-filesystems[1445]: Old size kept for /dev/sda9 Sep 6 01:20:04.178122 extend-filesystems[1445]: Found sr0 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.105681210Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.105832510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.119944840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.119987845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.120820472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.120843275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.120856997Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.120868878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.120958610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207591 env[1468]: time="2025-09-06T01:20:04.121329538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:20:04.207883 jq[1481]: true Sep 6 01:20:04.070874 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 01:20:04.214170 tar[1465]: linux-arm64/LICENSE Sep 6 01:20:04.214170 tar[1465]: linux-arm64/helm Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.121547886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.121566968Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.121623615Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.121635257Z" level=info msg="metadata content store policy set" policy=shared Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171807492Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171882262Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171897584Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171940670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171957792Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171972554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.171985875Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.172478178Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214387 env[1468]: time="2025-09-06T01:20:04.172500541Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.211781 dbus-daemon[1443]: [system] SELinux support is enabled Sep 6 01:20:04.071034 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.172513503Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.172525985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.172538186Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.172683005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.172772896Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173084616Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173129702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173146064Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173216753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173231195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173242557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173315806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173328408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.214991 env[1468]: time="2025-09-06T01:20:04.173339729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.129408 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173350130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173361212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173374893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173528473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173558757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173570999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173582160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173598802Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173610084Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173637567Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 01:20:04.215371 env[1468]: time="2025-09-06T01:20:04.173671932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 01:20:04.129577 systemd[1]: Finished extend-filesystems.service. Sep 6 01:20:04.175494 systemd[1]: Started containerd.service. 
Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.173893480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.173956888Z" level=info msg="Connect containerd service" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174016816Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174670220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174757111Z" level=info msg="Start subscribing containerd event" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174792555Z" level=info msg="Start recovering state" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174855443Z" level=info msg="Start event monitor" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174874766Z" level=info msg="Start snapshots syncer" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174884047Z" level=info msg="Start cni network conf syncer for default" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.174891528Z" level=info msg="Start streaming server" Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.175278898Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.175355468Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 01:20:04.215696 env[1468]: time="2025-09-06T01:20:04.199096953Z" level=info msg="containerd successfully booted in 0.127415s" Sep 6 01:20:04.211953 systemd[1]: Started dbus.service. 
Sep 6 01:20:04.231704 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:20:04.218737 dbus-daemon[1443]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 6 01:20:04.217682 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 01:20:04.217704 systemd[1]: Reached target system-config.target. Sep 6 01:20:04.227563 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 01:20:04.227582 systemd[1]: Reached target user-config.target. Sep 6 01:20:04.234221 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 01:20:04.243798 systemd[1]: Started systemd-logind.service. Sep 6 01:20:04.297607 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 01:20:04.641315 update_engine[1460]: I0906 01:20:04.628527 1460 main.cc:92] Flatcar Update Engine starting Sep 6 01:20:04.691514 systemd[1]: Started update-engine.service. Sep 6 01:20:04.698040 update_engine[1460]: I0906 01:20:04.697991 1460 update_check_scheduler.cc:74] Next update check in 11m45s Sep 6 01:20:04.701443 systemd[1]: Started locksmithd.service. Sep 6 01:20:04.889136 tar[1465]: linux-arm64/README.md Sep 6 01:20:04.894152 systemd[1]: Finished prepare-helm.service. Sep 6 01:20:04.978528 systemd[1]: Started kubelet.service. 
Sep 6 01:20:05.338187 kubelet[1550]: E0906 01:20:05.338132 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:05.340305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:05.340436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:20:05.577071 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 01:20:05.594471 systemd[1]: Finished sshd-keygen.service. Sep 6 01:20:05.600918 systemd[1]: Starting issuegen.service... Sep 6 01:20:05.605968 systemd[1]: Started waagent.service. Sep 6 01:20:05.612660 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 01:20:05.612867 systemd[1]: Finished issuegen.service. Sep 6 01:20:05.619943 systemd[1]: Starting systemd-user-sessions.service... Sep 6 01:20:05.672972 systemd[1]: Finished systemd-user-sessions.service. Sep 6 01:20:05.680352 systemd[1]: Started getty@tty1.service. Sep 6 01:20:05.686541 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 6 01:20:05.692406 systemd[1]: Reached target getty.target. Sep 6 01:20:05.697191 systemd[1]: Reached target multi-user.target. Sep 6 01:20:05.705844 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 01:20:05.719031 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 01:20:05.719231 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 01:20:05.727134 systemd[1]: Startup finished in 761ms (kernel) + 12.945s (initrd) + 19.023s (userspace) = 32.730s. 
Sep 6 01:20:06.054737 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 01:20:06.267513 login[1574]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 6 01:20:06.283168 login[1573]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:20:06.379876 systemd[1]: Created slice user-500.slice. Sep 6 01:20:06.380971 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 01:20:06.383150 systemd-logind[1457]: New session 1 of user core. Sep 6 01:20:06.413879 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 01:20:06.415366 systemd[1]: Starting user@500.service... Sep 6 01:20:06.444895 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:06.614903 systemd[1578]: Queued start job for default target default.target. Sep 6 01:20:06.616148 systemd[1578]: Reached target paths.target. Sep 6 01:20:06.616274 systemd[1578]: Reached target sockets.target. Sep 6 01:20:06.616349 systemd[1578]: Reached target timers.target. Sep 6 01:20:06.616414 systemd[1578]: Reached target basic.target. Sep 6 01:20:06.616524 systemd[1578]: Reached target default.target. Sep 6 01:20:06.616592 systemd[1]: Started user@500.service. Sep 6 01:20:06.617250 systemd[1578]: Startup finished in 166ms. Sep 6 01:20:06.617481 systemd[1]: Started session-1.scope. Sep 6 01:20:07.269285 login[1574]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:20:07.273336 systemd-logind[1457]: New session 2 of user core. Sep 6 01:20:07.273749 systemd[1]: Started session-2.scope. 
Sep 6 01:20:11.469503 waagent[1571]: 2025-09-06T01:20:11.469397Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 6 01:20:11.476970 waagent[1571]: 2025-09-06T01:20:11.476879Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 6 01:20:11.481851 waagent[1571]: 2025-09-06T01:20:11.481773Z INFO Daemon Daemon Python: 3.9.16 Sep 6 01:20:11.486655 waagent[1571]: 2025-09-06T01:20:11.486572Z INFO Daemon Daemon Run daemon Sep 6 01:20:11.491160 waagent[1571]: 2025-09-06T01:20:11.491077Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 6 01:20:11.508963 waagent[1571]: 2025-09-06T01:20:11.508830Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 6 01:20:11.525317 waagent[1571]: 2025-09-06T01:20:11.525169Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 01:20:11.537178 waagent[1571]: 2025-09-06T01:20:11.537054Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 01:20:11.543100 waagent[1571]: 2025-09-06T01:20:11.543006Z INFO Daemon Daemon Using waagent for provisioning Sep 6 01:20:11.549799 waagent[1571]: 2025-09-06T01:20:11.549715Z INFO Daemon Daemon Activate resource disk Sep 6 01:20:11.555804 waagent[1571]: 2025-09-06T01:20:11.555721Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 6 01:20:11.572172 waagent[1571]: 2025-09-06T01:20:11.572071Z INFO Daemon Daemon Found device: None Sep 6 01:20:11.578221 waagent[1571]: 2025-09-06T01:20:11.578135Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 6 01:20:11.587851 waagent[1571]: 2025-09-06T01:20:11.587765Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 6 
01:20:11.601744 waagent[1571]: 2025-09-06T01:20:11.601674Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 01:20:11.608214 waagent[1571]: 2025-09-06T01:20:11.608139Z INFO Daemon Daemon Running default provisioning handler Sep 6 01:20:11.621614 waagent[1571]: 2025-09-06T01:20:11.621473Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 6 01:20:11.639062 waagent[1571]: 2025-09-06T01:20:11.638924Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 01:20:11.650585 waagent[1571]: 2025-09-06T01:20:11.650476Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 01:20:11.657443 waagent[1571]: 2025-09-06T01:20:11.657345Z INFO Daemon Daemon Copying ovf-env.xml Sep 6 01:20:11.728099 waagent[1571]: 2025-09-06T01:20:11.727679Z INFO Daemon Daemon Successfully mounted dvd Sep 6 01:20:11.824333 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 6 01:20:11.853409 waagent[1571]: 2025-09-06T01:20:11.853265Z INFO Daemon Daemon Detect protocol endpoint Sep 6 01:20:11.859424 waagent[1571]: 2025-09-06T01:20:11.859338Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 01:20:11.865711 waagent[1571]: 2025-09-06T01:20:11.865633Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 6 01:20:11.873537 waagent[1571]: 2025-09-06T01:20:11.873457Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 6 01:20:11.880344 waagent[1571]: 2025-09-06T01:20:11.880272Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 6 01:20:11.886304 waagent[1571]: 2025-09-06T01:20:11.886235Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 6 01:20:11.987604 waagent[1571]: 2025-09-06T01:20:11.987474Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 6 01:20:11.994954 waagent[1571]: 2025-09-06T01:20:11.994907Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 6 01:20:12.001169 waagent[1571]: 2025-09-06T01:20:12.001089Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 6 01:20:12.811063 waagent[1571]: 2025-09-06T01:20:12.810899Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 6 01:20:12.827197 waagent[1571]: 2025-09-06T01:20:12.827089Z INFO Daemon Daemon Forcing an update of the goal state.. Sep 6 01:20:12.833650 waagent[1571]: 2025-09-06T01:20:12.833578Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 6 01:20:13.013274 waagent[1571]: 2025-09-06T01:20:13.013099Z INFO Daemon Daemon Found private key matching thumbprint B24E03F20B454BA7FF7C416E45B2141CC635C8B6 Sep 6 01:20:13.022883 waagent[1571]: 2025-09-06T01:20:13.022797Z INFO Daemon Daemon Fetch goal state completed Sep 6 01:20:13.094532 waagent[1571]: 2025-09-06T01:20:13.094445Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 6f8f97fd-0350-4982-8b89-9632d4d4868e New eTag: 17992896523715505854] Sep 6 01:20:13.106364 waagent[1571]: 2025-09-06T01:20:13.106284Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 01:20:13.122452 waagent[1571]: 2025-09-06T01:20:13.122365Z INFO Daemon Daemon Starting provisioning Sep 6 01:20:13.128310 waagent[1571]: 2025-09-06T01:20:13.128236Z INFO Daemon Daemon Handle ovf-env.xml. 
Sep 6 01:20:13.134218 waagent[1571]: 2025-09-06T01:20:13.134143Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-4d72badcbe] Sep 6 01:20:13.183489 waagent[1571]: 2025-09-06T01:20:13.183345Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-4d72badcbe] Sep 6 01:20:13.192783 waagent[1571]: 2025-09-06T01:20:13.192693Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 6 01:20:13.200557 waagent[1571]: 2025-09-06T01:20:13.200486Z INFO Daemon Daemon Primary interface is [eth0] Sep 6 01:20:13.217997 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 6 01:20:13.218178 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 6 01:20:13.218236 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 6 01:20:13.218474 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:20:13.222157 systemd-networkd[1245]: eth0: DHCPv6 lease lost Sep 6 01:20:13.223488 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:20:13.223661 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:20:13.225826 systemd[1]: Starting systemd-networkd.service... Sep 6 01:20:13.255734 systemd-networkd[1620]: enP15816s1: Link UP Sep 6 01:20:13.255747 systemd-networkd[1620]: enP15816s1: Gained carrier Sep 6 01:20:13.256857 systemd-networkd[1620]: eth0: Link UP Sep 6 01:20:13.256867 systemd-networkd[1620]: eth0: Gained carrier Sep 6 01:20:13.257235 systemd-networkd[1620]: lo: Link UP Sep 6 01:20:13.257244 systemd-networkd[1620]: lo: Gained carrier Sep 6 01:20:13.257494 systemd-networkd[1620]: eth0: Gained IPv6LL Sep 6 01:20:13.258771 systemd-networkd[1620]: Enumeration completed Sep 6 01:20:13.258911 systemd[1]: Started systemd-networkd.service. Sep 6 01:20:13.260515 systemd-networkd[1620]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:20:13.260605 systemd[1]: Starting systemd-networkd-wait-online.service... 
Sep 6 01:20:13.267814 waagent[1571]: 2025-09-06T01:20:13.267645Z INFO Daemon Daemon Create user account if not exists Sep 6 01:20:13.273910 waagent[1571]: 2025-09-06T01:20:13.273826Z INFO Daemon Daemon User core already exists, skip useradd Sep 6 01:20:13.281156 waagent[1571]: 2025-09-06T01:20:13.281062Z INFO Daemon Daemon Configure sudoer Sep 6 01:20:13.281257 systemd-networkd[1620]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:20:13.287472 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:20:13.292896 waagent[1571]: 2025-09-06T01:20:13.288475Z INFO Daemon Daemon Configure sshd Sep 6 01:20:13.293306 waagent[1571]: 2025-09-06T01:20:13.293230Z INFO Daemon Daemon Deploy ssh public key. Sep 6 01:20:14.445718 waagent[1571]: 2025-09-06T01:20:14.445618Z INFO Daemon Daemon Provisioning complete Sep 6 01:20:14.470593 waagent[1571]: 2025-09-06T01:20:14.470524Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 6 01:20:14.477795 waagent[1571]: 2025-09-06T01:20:14.477713Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 6 01:20:14.489179 waagent[1571]: 2025-09-06T01:20:14.489070Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 6 01:20:14.796493 waagent[1626]: 2025-09-06T01:20:14.796396Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 6 01:20:14.797293 waagent[1626]: 2025-09-06T01:20:14.797235Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:14.797439 waagent[1626]: 2025-09-06T01:20:14.797390Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:14.809949 waagent[1626]: 2025-09-06T01:20:14.809867Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 6 01:20:14.810145 waagent[1626]: 2025-09-06T01:20:14.810078Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 6 01:20:14.870490 waagent[1626]: 2025-09-06T01:20:14.870334Z INFO ExtHandler ExtHandler Found private key matching thumbprint B24E03F20B454BA7FF7C416E45B2141CC635C8B6 Sep 6 01:20:14.870834 waagent[1626]: 2025-09-06T01:20:14.870750Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 6 01:20:14.885593 waagent[1626]: 2025-09-06T01:20:14.885537Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 7f3a9aee-74c2-408d-9b44-eb85ed8e579e New eTag: 17992896523715505854] Sep 6 01:20:14.886192 waagent[1626]: 2025-09-06T01:20:14.886132Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 01:20:14.938979 waagent[1626]: 2025-09-06T01:20:14.938817Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 01:20:14.961199 waagent[1626]: 2025-09-06T01:20:14.961098Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1626 Sep 6 01:20:14.965040 waagent[1626]: 2025-09-06T01:20:14.964969Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:20:14.966354 waagent[1626]: 2025-09-06T01:20:14.966298Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 6 01:20:15.069422 waagent[1626]: 2025-09-06T01:20:15.069306Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:20:15.069798 waagent[1626]: 2025-09-06T01:20:15.069737Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:20:15.077532 waagent[1626]: 2025-09-06T01:20:15.077476Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 6 01:20:15.078272 waagent[1626]: 2025-09-06T01:20:15.078210Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:20:15.079557 waagent[1626]: 2025-09-06T01:20:15.079493Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 6 01:20:15.081043 waagent[1626]: 2025-09-06T01:20:15.080973Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:20:15.081347 waagent[1626]: 2025-09-06T01:20:15.081277Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:15.082068 waagent[1626]: 2025-09-06T01:20:15.081994Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:15.082687 waagent[1626]: 2025-09-06T01:20:15.082621Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 6 01:20:15.083011 waagent[1626]: 2025-09-06T01:20:15.082951Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 01:20:15.083011 waagent[1626]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 01:20:15.083011 waagent[1626]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 01:20:15.083011 waagent[1626]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 01:20:15.083011 waagent[1626]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:15.083011 waagent[1626]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:15.083011 waagent[1626]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:15.085297 waagent[1626]: 2025-09-06T01:20:15.085098Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 6 01:20:15.085845 waagent[1626]: 2025-09-06T01:20:15.085762Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:15.086445 waagent[1626]: 2025-09-06T01:20:15.086376Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:15.087055 waagent[1626]: 2025-09-06T01:20:15.086982Z INFO EnvHandler ExtHandler Configure routes Sep 6 01:20:15.087320 waagent[1626]: 2025-09-06T01:20:15.087252Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 01:20:15.087424 waagent[1626]: 2025-09-06T01:20:15.087365Z INFO EnvHandler ExtHandler Gateway:None Sep 6 01:20:15.087614 waagent[1626]: 2025-09-06T01:20:15.087549Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 01:20:15.087824 waagent[1626]: 2025-09-06T01:20:15.087761Z INFO EnvHandler ExtHandler Routes:None Sep 6 01:20:15.088816 waagent[1626]: 2025-09-06T01:20:15.088489Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 01:20:15.088993 waagent[1626]: 2025-09-06T01:20:15.088916Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 6 01:20:15.090089 waagent[1626]: 2025-09-06T01:20:15.090020Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 01:20:15.101650 waagent[1626]: 2025-09-06T01:20:15.101579Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 6 01:20:15.102434 waagent[1626]: 2025-09-06T01:20:15.102387Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 01:20:15.103490 waagent[1626]: 2025-09-06T01:20:15.103436Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Sep 6 01:20:15.127345 waagent[1626]: 2025-09-06T01:20:15.127250Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1620' Sep 6 01:20:15.150042 waagent[1626]: 2025-09-06T01:20:15.149976Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Sep 6 01:20:15.202585 waagent[1626]: 2025-09-06T01:20:15.202451Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 01:20:15.202585 waagent[1626]: Executing ['ip', '-a', '-o', 'link']: Sep 6 01:20:15.202585 waagent[1626]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 01:20:15.202585 waagent[1626]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:3e:dc brd ff:ff:ff:ff:ff:ff Sep 6 01:20:15.202585 waagent[1626]: 3: enP15816s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:3e:dc brd ff:ff:ff:ff:ff:ff\ altname enP15816p0s2 Sep 6 01:20:15.202585 waagent[1626]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 01:20:15.202585 waagent[1626]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 01:20:15.202585 waagent[1626]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 01:20:15.202585 waagent[1626]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 01:20:15.202585 waagent[1626]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 6 01:20:15.202585 waagent[1626]: 2: eth0 inet6 fe80::20d:3aff:fef9:3edc/64 scope link \ valid_lft forever preferred_lft forever Sep 6 01:20:15.465653 waagent[1626]: 2025-09-06T01:20:15.465461Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Sep 6 01:20:15.468930 waagent[1626]: 2025-09-06T01:20:15.468788Z INFO 
EnvHandler ExtHandler Firewall rules: Sep 6 01:20:15.468930 waagent[1626]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:15.468930 waagent[1626]: pkts bytes target prot opt in out source destination Sep 6 01:20:15.468930 waagent[1626]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:15.468930 waagent[1626]: pkts bytes target prot opt in out source destination Sep 6 01:20:15.468930 waagent[1626]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:15.468930 waagent[1626]: pkts bytes target prot opt in out source destination Sep 6 01:20:15.468930 waagent[1626]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:20:15.468930 waagent[1626]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:20:15.470839 waagent[1626]: 2025-09-06T01:20:15.470789Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 6 01:20:15.488648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 01:20:15.488829 systemd[1]: Stopped kubelet.service. Sep 6 01:20:15.490217 systemd[1]: Starting kubelet.service... Sep 6 01:20:15.517147 waagent[1626]: 2025-09-06T01:20:15.517054Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 6 01:20:15.585608 systemd[1]: Started kubelet.service. Sep 6 01:20:15.717346 kubelet[1668]: E0906 01:20:15.717243 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:15.720405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:15.720528 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 01:20:16.494147 waagent[1571]: 2025-09-06T01:20:16.493757Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 6 01:20:16.499365 waagent[1571]: 2025-09-06T01:20:16.499304Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 6 01:20:17.780678 waagent[1673]: 2025-09-06T01:20:17.780586Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 6 01:20:17.781756 waagent[1673]: 2025-09-06T01:20:17.781699Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 6 01:20:17.782007 waagent[1673]: 2025-09-06T01:20:17.781960Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 6 01:20:17.782267 waagent[1673]: 2025-09-06T01:20:17.782220Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 6 01:20:17.796297 waagent[1673]: 2025-09-06T01:20:17.796193Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 01:20:17.796838 waagent[1673]: 2025-09-06T01:20:17.796787Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:17.797136 waagent[1673]: 2025-09-06T01:20:17.797060Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:17.797459 waagent[1673]: 2025-09-06T01:20:17.797409Z INFO ExtHandler ExtHandler Initializing the goal state... 
Sep 6 01:20:17.811506 waagent[1673]: 2025-09-06T01:20:17.811424Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 6 01:20:17.823879 waagent[1673]: 2025-09-06T01:20:17.823818Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 6 01:20:17.825201 waagent[1673]: 2025-09-06T01:20:17.825144Z INFO ExtHandler Sep 6 01:20:17.825479 waagent[1673]: 2025-09-06T01:20:17.825430Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f756e72e-0c07-487a-9966-41526422ad0c eTag: 17992896523715505854 source: Fabric] Sep 6 01:20:17.826377 waagent[1673]: 2025-09-06T01:20:17.826323Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 6 01:20:17.827744 waagent[1673]: 2025-09-06T01:20:17.827687Z INFO ExtHandler Sep 6 01:20:17.827985 waagent[1673]: 2025-09-06T01:20:17.827939Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 6 01:20:17.835666 waagent[1673]: 2025-09-06T01:20:17.835617Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 6 01:20:17.836359 waagent[1673]: 2025-09-06T01:20:17.836311Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 01:20:17.859726 waagent[1673]: 2025-09-06T01:20:17.859660Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 6 01:20:17.927446 waagent[1673]: 2025-09-06T01:20:17.927301Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B24E03F20B454BA7FF7C416E45B2141CC635C8B6', 'hasPrivateKey': True} Sep 6 01:20:17.929182 waagent[1673]: 2025-09-06T01:20:17.929087Z INFO ExtHandler Fetch goal state from WireServer completed Sep 6 01:20:17.930265 waagent[1673]: 2025-09-06T01:20:17.930209Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Sep 6 01:20:17.951926 waagent[1673]: 2025-09-06T01:20:17.951805Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 6 01:20:17.961055 waagent[1673]: 2025-09-06T01:20:17.960907Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:20:17.965283 waagent[1673]: 2025-09-06T01:20:17.965177Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 6 01:20:17.965673 waagent[1673]: 2025-09-06T01:20:17.965624Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 6 01:20:18.005817 waagent[1673]: 2025-09-06T01:20:18.005681Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Sep 6 01:20:18.005817 waagent[1673]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:18.005817 waagent[1673]: pkts bytes target prot opt in out source destination Sep 6 01:20:18.005817 waagent[1673]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:18.005817 waagent[1673]: pkts bytes target prot opt in out source destination Sep 6 01:20:18.005817 waagent[1673]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:18.005817 waagent[1673]: pkts bytes target prot opt in out source destination Sep 6 01:20:18.005817 waagent[1673]: 81 9209 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:20:18.005817 waagent[1673]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:20:18.007388 waagent[1673]: 2025-09-06T01:20:18.007329Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 6 01:20:18.010495 waagent[1673]: 2025-09-06T01:20:18.010380Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 6 01:20:18.010908 waagent[1673]: 2025-09-06T01:20:18.010858Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:20:18.011478 waagent[1673]: 2025-09-06T01:20:18.011422Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:20:18.020283 waagent[1673]: 2025-09-06T01:20:18.020221Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 6 01:20:18.021065 waagent[1673]: 2025-09-06T01:20:18.021012Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:20:18.029804 waagent[1673]: 2025-09-06T01:20:18.029728Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1673 Sep 6 01:20:18.033586 waagent[1673]: 2025-09-06T01:20:18.033458Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:20:18.034653 waagent[1673]: 2025-09-06T01:20:18.034597Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 6 01:20:18.035722 waagent[1673]: 2025-09-06T01:20:18.035668Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 6 01:20:18.038704 waagent[1673]: 2025-09-06T01:20:18.038650Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 6 01:20:18.039208 waagent[1673]: 2025-09-06T01:20:18.039154Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 6 01:20:18.040750 waagent[1673]: 2025-09-06T01:20:18.040687Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:20:18.041054 waagent[1673]: 2025-09-06T01:20:18.040988Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:18.041685 waagent[1673]: 2025-09-06T01:20:18.041609Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:18.042325 waagent[1673]: 2025-09-06T01:20:18.042257Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 6 01:20:18.042990 waagent[1673]: 2025-09-06T01:20:18.042913Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 6 01:20:18.044126 waagent[1673]: 2025-09-06T01:20:18.043936Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 6 01:20:18.044311 waagent[1673]: 2025-09-06T01:20:18.044238Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 6 01:20:18.044652 waagent[1673]: 2025-09-06T01:20:18.044583Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 6 01:20:18.044652 waagent[1673]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 6 01:20:18.044652 waagent[1673]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 6 01:20:18.044652 waagent[1673]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 6 01:20:18.044652 waagent[1673]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 6 01:20:18.044652 waagent[1673]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 6 01:20:18.044652 waagent[1673]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 6 01:20:18.045468 waagent[1673]: 2025-09-06T01:20:18.045403Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 6 01:20:18.048101 waagent[1673]: 2025-09-06T01:20:18.047933Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 6 01:20:18.048640 waagent[1673]: 2025-09-06T01:20:18.048571Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 6 01:20:18.048902 waagent[1673]: 2025-09-06T01:20:18.048829Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 6 01:20:18.049490 waagent[1673]: 2025-09-06T01:20:18.049411Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 6 01:20:18.051522 waagent[1673]: 2025-09-06T01:20:18.051445Z INFO EnvHandler ExtHandler Configure routes
Sep 6 01:20:18.057990 waagent[1673]: 2025-09-06T01:20:18.057871Z INFO EnvHandler ExtHandler Gateway:None
Sep 6 01:20:18.058560 waagent[1673]: 2025-09-06T01:20:18.058498Z INFO EnvHandler ExtHandler Routes:None
Sep 6 01:20:18.080213 waagent[1673]: 2025-09-06T01:20:18.080090Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 6 01:20:18.080213 waagent[1673]: Executing ['ip', '-a', '-o', 'link']:
Sep 6 01:20:18.080213 waagent[1673]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 6 01:20:18.080213 waagent[1673]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:3e:dc brd ff:ff:ff:ff:ff:ff
Sep 6 01:20:18.080213 waagent[1673]: 3: enP15816s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:3e:dc brd ff:ff:ff:ff:ff:ff\ altname enP15816p0s2
Sep 6 01:20:18.080213 waagent[1673]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 6 01:20:18.080213 waagent[1673]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 6 01:20:18.080213 waagent[1673]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 6 01:20:18.080213 waagent[1673]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 6 01:20:18.080213 waagent[1673]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Sep 6 01:20:18.080213 waagent[1673]: 2: eth0 inet6 fe80::20d:3aff:fef9:3edc/64 scope link \ valid_lft forever preferred_lft forever
Sep 6 01:20:18.081128 waagent[1673]: 2025-09-06T01:20:18.081045Z INFO ExtHandler ExtHandler Downloading agent manifest
Sep 6 01:20:18.119320 waagent[1673]: 2025-09-06T01:20:18.119234Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Sep 6 01:20:18.140846 waagent[1673]: 2025-09-06T01:20:18.140752Z INFO ExtHandler ExtHandler
Sep 6 01:20:18.143271 waagent[1673]: 2025-09-06T01:20:18.143050Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 30213933-b2a8-443d-8f0b-d1b5b90489b7 correlation 9a039a58-3dd4-473f-905f-d85c93321a89 created: 2025-09-06T01:18:53.103691Z]
Sep 6 01:20:18.146133 waagent[1673]: 2025-09-06T01:20:18.146032Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. Current state:
Sep 6 01:20:18.146133 waagent[1673]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 6 01:20:18.146133 waagent[1673]: pkts bytes target prot opt in out source destination
Sep 6 01:20:18.146133 waagent[1673]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 6 01:20:18.146133 waagent[1673]: pkts bytes target prot opt in out source destination
Sep 6 01:20:18.146133 waagent[1673]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 6 01:20:18.146133 waagent[1673]: pkts bytes target prot opt in out source destination
Sep 6 01:20:18.146133 waagent[1673]: 104 12011 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 6 01:20:18.146133 waagent[1673]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 6 01:20:18.148839 waagent[1673]: 2025-09-06T01:20:18.148763Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 6 01:20:18.156554 waagent[1673]: 2025-09-06T01:20:18.156463Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 15 ms]
Sep 6 01:20:18.190221 waagent[1673]: 2025-09-06T01:20:18.190102Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Sep 6 01:20:18.197933 waagent[1673]: 2025-09-06T01:20:18.197845Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 71D299E0-D2CB-403C-8A7D-006A14632768;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Sep 6 01:20:18.219533 waagent[1673]: 2025-09-06T01:20:18.219418Z INFO EnvHandler ExtHandler The firewall was setup successfully:
Sep 6 01:20:18.219533 waagent[1673]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 6 01:20:18.219533 waagent[1673]: pkts bytes target prot opt in out source destination
Sep 6 01:20:18.219533 waagent[1673]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 6 01:20:18.219533 waagent[1673]: pkts bytes target prot opt in out source destination
Sep 6 01:20:18.219533 waagent[1673]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 6 01:20:18.219533 waagent[1673]: pkts bytes target prot opt in out source destination
Sep 6 01:20:18.219533 waagent[1673]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 6 01:20:18.219533 waagent[1673]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 6 01:20:18.219533 waagent[1673]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 6 01:20:25.738653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 01:20:25.738828 systemd[1]: Stopped kubelet.service.
Sep 6 01:20:25.740224 systemd[1]: Starting kubelet.service...
Sep 6 01:20:25.829174 systemd[1]: Started kubelet.service.
Sep 6 01:20:25.958756 kubelet[1723]: E0906 01:20:25.958704 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:20:25.961153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:20:25.961279 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:20:35.988740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 6 01:20:35.988904 systemd[1]: Stopped kubelet.service.
Sep 6 01:20:35.990263 systemd[1]: Starting kubelet.service...
Sep 6 01:20:36.079749 systemd[1]: Started kubelet.service.
Sep 6 01:20:36.202725 kubelet[1732]: E0906 01:20:36.202673 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:20:36.205200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:20:36.205321 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:20:40.674182 systemd[1]: Created slice system-sshd.slice.
Sep 6 01:20:40.675853 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:56882.service.
Sep 6 01:20:41.270661 sshd[1739]: Accepted publickey for core from 10.200.16.10 port 56882 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:20:41.285838 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:20:41.289166 systemd-logind[1457]: New session 3 of user core.
Sep 6 01:20:41.289921 systemd[1]: Started session-3.scope.
Sep 6 01:20:41.690415 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:56886.service.
Sep 6 01:20:42.178373 sshd[1744]: Accepted publickey for core from 10.200.16.10 port 56886 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:20:42.179618 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:20:42.183383 systemd-logind[1457]: New session 4 of user core.
Sep 6 01:20:42.183753 systemd[1]: Started session-4.scope.
Sep 6 01:20:42.533719 sshd[1744]: pam_unix(sshd:session): session closed for user core
Sep 6 01:20:42.536217 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:56886.service: Deactivated successfully.
Sep 6 01:20:42.536865 systemd[1]: session-4.scope: Deactivated successfully.
Sep 6 01:20:42.537407 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit.
Sep 6 01:20:42.538360 systemd-logind[1457]: Removed session 4.
Sep 6 01:20:42.614519 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:56888.service.
Sep 6 01:20:43.064259 sshd[1750]: Accepted publickey for core from 10.200.16.10 port 56888 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:20:43.065583 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:20:43.069476 systemd-logind[1457]: New session 5 of user core.
Sep 6 01:20:43.069848 systemd[1]: Started session-5.scope.
Sep 6 01:20:43.380916 sshd[1750]: pam_unix(sshd:session): session closed for user core
Sep 6 01:20:43.383730 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:56888.service: Deactivated successfully.
Sep 6 01:20:43.384393 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 01:20:43.384878 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
Sep 6 01:20:43.385653 systemd-logind[1457]: Removed session 5.
Sep 6 01:20:43.448687 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:56894.service.
Sep 6 01:20:43.856082 sshd[1756]: Accepted publickey for core from 10.200.16.10 port 56894 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:20:43.857651 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:20:43.861792 systemd[1]: Started session-6.scope.
Sep 6 01:20:43.863027 systemd-logind[1457]: New session 6 of user core.
Sep 6 01:20:44.182738 sshd[1756]: pam_unix(sshd:session): session closed for user core
Sep 6 01:20:44.185363 systemd[1]: session-6.scope: Deactivated successfully.
Sep 6 01:20:44.185930 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit.
Sep 6 01:20:44.186061 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:56894.service: Deactivated successfully.
Sep 6 01:20:44.187164 systemd-logind[1457]: Removed session 6.
Sep 6 01:20:44.257244 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:56900.service.
Sep 6 01:20:44.510633 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 6 01:20:44.708579 sshd[1762]: Accepted publickey for core from 10.200.16.10 port 56900 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:20:44.709816 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:20:44.713651 systemd-logind[1457]: New session 7 of user core.
Sep 6 01:20:44.714086 systemd[1]: Started session-7.scope.
Sep 6 01:20:45.229817 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 6 01:20:45.230031 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 6 01:20:45.251268 systemd[1]: Starting docker.service...
Sep 6 01:20:45.296762 env[1775]: time="2025-09-06T01:20:45.296712686Z" level=info msg="Starting up"
Sep 6 01:20:45.297997 env[1775]: time="2025-09-06T01:20:45.297970017Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 6 01:20:45.297997 env[1775]: time="2025-09-06T01:20:45.297992578Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 6 01:20:45.297997 env[1775]: time="2025-09-06T01:20:45.298012458Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 6 01:20:45.298182 env[1775]: time="2025-09-06T01:20:45.298022858Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 6 01:20:45.299653 env[1775]: time="2025-09-06T01:20:45.299633352Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 6 01:20:45.299741 env[1775]: time="2025-09-06T01:20:45.299728393Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 6 01:20:45.299805 env[1775]: time="2025-09-06T01:20:45.299789714Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 6 01:20:45.299861 env[1775]: time="2025-09-06T01:20:45.299849154Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 6 01:20:45.382285 env[1775]: time="2025-09-06T01:20:45.382245064Z" level=info msg="Loading containers: start."
Sep 6 01:20:45.549145 kernel: Initializing XFRM netlink socket
Sep 6 01:20:45.568776 env[1775]: time="2025-09-06T01:20:45.568744161Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 6 01:20:45.695618 systemd-networkd[1620]: docker0: Link UP
Sep 6 01:20:45.721303 env[1775]: time="2025-09-06T01:20:45.721256309Z" level=info msg="Loading containers: done."
Sep 6 01:20:45.730063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1154929728-merged.mount: Deactivated successfully.
Sep 6 01:20:45.742910 env[1775]: time="2025-09-06T01:20:45.742868586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 6 01:20:45.743323 env[1775]: time="2025-09-06T01:20:45.743304990Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 6 01:20:45.743504 env[1775]: time="2025-09-06T01:20:45.743489831Z" level=info msg="Daemon has completed initialization"
Sep 6 01:20:45.777029 systemd[1]: Started docker.service.
Sep 6 01:20:45.783743 env[1775]: time="2025-09-06T01:20:45.783667957Z" level=info msg="API listen on /run/docker.sock"
Sep 6 01:20:46.238632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 6 01:20:46.238814 systemd[1]: Stopped kubelet.service.
Sep 6 01:20:46.240238 systemd[1]: Starting kubelet.service...
Sep 6 01:20:46.393191 systemd[1]: Started kubelet.service.
Sep 6 01:20:46.432166 kubelet[1892]: E0906 01:20:46.432125 1892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:20:46.438824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:20:46.438966 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:20:47.170035 env[1468]: time="2025-09-06T01:20:47.169993177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 6 01:20:47.933450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359392349.mount: Deactivated successfully.
Sep 6 01:20:50.000197 update_engine[1460]: I0906 01:20:50.000149 1460 update_attempter.cc:509] Updating boot flags...
Sep 6 01:20:50.286999 env[1468]: time="2025-09-06T01:20:50.286661516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:50.295130 env[1468]: time="2025-09-06T01:20:50.294882410Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:50.299406 env[1468]: time="2025-09-06T01:20:50.299373079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:50.304672 env[1468]: time="2025-09-06T01:20:50.304642114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:50.305513 env[1468]: time="2025-09-06T01:20:50.305485480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 6 01:20:50.307685 env[1468]: time="2025-09-06T01:20:50.307449893Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 6 01:20:52.557187 env[1468]: time="2025-09-06T01:20:52.557130502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:52.568679 env[1468]: time="2025-09-06T01:20:52.568626848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:52.574827 env[1468]: time="2025-09-06T01:20:52.574792964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:52.580137 env[1468]: time="2025-09-06T01:20:52.580091475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:52.581032 env[1468]: time="2025-09-06T01:20:52.581005200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 6 01:20:52.582479 env[1468]: time="2025-09-06T01:20:52.582457208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 6 01:20:54.764764 env[1468]: time="2025-09-06T01:20:54.764710033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:54.772698 env[1468]: time="2025-09-06T01:20:54.772641553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:54.778429 env[1468]: time="2025-09-06T01:20:54.778393703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:54.783011 env[1468]: time="2025-09-06T01:20:54.782965726Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:54.783738 env[1468]: time="2025-09-06T01:20:54.783710410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 6 01:20:54.784420 env[1468]: time="2025-09-06T01:20:54.784395373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 6 01:20:56.235871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672244718.mount: Deactivated successfully.
Sep 6 01:20:56.488598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 6 01:20:56.488763 systemd[1]: Stopped kubelet.service.
Sep 6 01:20:56.490095 systemd[1]: Starting kubelet.service...
Sep 6 01:20:56.580191 systemd[1]: Started kubelet.service.
Sep 6 01:20:56.695966 kubelet[1972]: E0906 01:20:56.695916 1972 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:20:56.697455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:20:56.697590 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:20:57.160888 env[1468]: time="2025-09-06T01:20:57.160229271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:57.530864 env[1468]: time="2025-09-06T01:20:57.530829985Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:57.582677 env[1468]: time="2025-09-06T01:20:57.582617242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:57.632528 env[1468]: time="2025-09-06T01:20:57.632482852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:20:57.633323 env[1468]: time="2025-09-06T01:20:57.632972814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 6 01:20:57.634294 env[1468]: time="2025-09-06T01:20:57.634262859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 6 01:20:58.825019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170548925.mount: Deactivated successfully.
Sep 6 01:21:00.075788 env[1468]: time="2025-09-06T01:21:00.075740635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.092697 env[1468]: time="2025-09-06T01:21:00.092655653Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.096001 env[1468]: time="2025-09-06T01:21:00.095972665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.100409 env[1468]: time="2025-09-06T01:21:00.100378880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.101137 env[1468]: time="2025-09-06T01:21:00.101088242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 6 01:21:00.101794 env[1468]: time="2025-09-06T01:21:00.101771925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 01:21:00.655193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount802754803.mount: Deactivated successfully.
Sep 6 01:21:00.683226 env[1468]: time="2025-09-06T01:21:00.683169614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.692929 env[1468]: time="2025-09-06T01:21:00.692892088Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.699689 env[1468]: time="2025-09-06T01:21:00.699653911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.705855 env[1468]: time="2025-09-06T01:21:00.705822672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:00.706635 env[1468]: time="2025-09-06T01:21:00.706609975Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 6 01:21:00.708012 env[1468]: time="2025-09-06T01:21:00.707967060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 6 01:21:01.292827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2633777497.mount: Deactivated successfully.
Sep 6 01:21:04.310190 env[1468]: time="2025-09-06T01:21:04.310131823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:04.317782 env[1468]: time="2025-09-06T01:21:04.317739160Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:04.322376 env[1468]: time="2025-09-06T01:21:04.322334162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:04.327266 env[1468]: time="2025-09-06T01:21:04.327228482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:04.328237 env[1468]: time="2025-09-06T01:21:04.328205314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 6 01:21:06.738643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 6 01:21:06.738817 systemd[1]: Stopped kubelet.service.
Sep 6 01:21:06.740207 systemd[1]: Starting kubelet.service...
Sep 6 01:21:06.928866 systemd[1]: Started kubelet.service.
Sep 6 01:21:06.974560 kubelet[2001]: E0906 01:21:06.974525 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:21:06.976453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:21:06.976583 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:21:09.801171 systemd[1]: Stopped kubelet.service.
Sep 6 01:21:09.803877 systemd[1]: Starting kubelet.service...
Sep 6 01:21:09.845837 systemd[1]: Reloading.
Sep 6 01:21:09.939138 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2025-09-06T01:21:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 01:21:09.939168 /usr/lib/systemd/system-generators/torcx-generator[2034]: time="2025-09-06T01:21:09Z" level=info msg="torcx already run"
Sep 6 01:21:10.014090 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 01:21:10.014286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 01:21:10.029877 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 01:21:10.128125 systemd[1]: Started kubelet.service.
Sep 6 01:21:10.131606 systemd[1]: Stopping kubelet.service...
Sep 6 01:21:10.132308 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 01:21:10.132481 systemd[1]: Stopped kubelet.service.
Sep 6 01:21:10.134498 systemd[1]: Starting kubelet.service...
Sep 6 01:21:10.308503 systemd[1]: Started kubelet.service.
Sep 6 01:21:10.466070 kubelet[2100]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 01:21:10.466444 kubelet[2100]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 6 01:21:10.466498 kubelet[2100]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 01:21:10.466622 kubelet[2100]: I0906 01:21:10.466597 2100 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 01:21:11.199606 kubelet[2100]: I0906 01:21:11.199561 2100 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 6 01:21:11.199606 kubelet[2100]: I0906 01:21:11.199596 2100 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 01:21:11.199850 kubelet[2100]: I0906 01:21:11.199830 2100 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 6 01:21:11.219801 kubelet[2100]: I0906 01:21:11.219774 2100 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 01:21:11.220046 kubelet[2100]: E0906 01:21:11.220002 2100 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 6 01:21:11.231210 kubelet[2100]: E0906 01:21:11.231172 2100 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 01:21:11.231374 kubelet[2100]: I0906 01:21:11.231361 2100 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 01:21:11.234365 kubelet[2100]: I0906 01:21:11.234346 2100 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 01:21:11.234728 kubelet[2100]: I0906 01:21:11.234702 2100 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 01:21:11.234950 kubelet[2100]: I0906 01:21:11.234789 2100 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-4d72badcbe","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 01:21:11.235087 kubelet[2100]: I0906 01:21:11.235076 2100 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 01:21:11.235168 kubelet[2100]: I0906 01:21:11.235159 2100 container_manager_linux.go:303] "Creating device plugin manager"
Sep 6 01:21:11.235332 kubelet[2100]: I0906 01:21:11.235321 2100 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 01:21:11.238307 kubelet[2100]: I0906 01:21:11.238287 2100 kubelet.go:480] "Attempting to sync node with API server"
Sep 6 01:21:11.238449 kubelet[2100]: I0906 01:21:11.238437 2100 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 01:21:11.240814 kubelet[2100]: I0906 01:21:11.240794 2100 kubelet.go:386] "Adding apiserver pod source"
Sep 6 01:21:11.240939 kubelet[2100]: I0906 01:21:11.240928 2100 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 01:21:11.244439 kubelet[2100]: E0906 01:21:11.244389 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-4d72badcbe&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 01:21:11.244539 kubelet[2100]: I0906 01:21:11.244496 2100 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 01:21:11.245101 kubelet[2100]: I0906 01:21:11.245071 2100 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 6 01:21:11.245173 kubelet[2100]: W0906 01:21:11.245148 2100 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 01:21:11.249769 kubelet[2100]: I0906 01:21:11.249744 2100 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 01:21:11.249860 kubelet[2100]: I0906 01:21:11.249798 2100 server.go:1289] "Started kubelet"
Sep 6 01:21:11.254352 kubelet[2100]: E0906 01:21:11.254320 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 01:21:11.255495 kubelet[2100]: E0906 01:21:11.254507 2100 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-4d72badcbe.18628cd9b1fb99aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-4d72badcbe,UID:ci-3510.3.8-n-4d72badcbe,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-4d72badcbe,},FirstTimestamp:2025-09-06 01:21:11.24976273 +0000 UTC m=+0.935812586,LastTimestamp:2025-09-06 01:21:11.24976273 +0000 UTC m=+0.935812586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-4d72badcbe,}"
Sep 6 01:21:11.257430 kubelet[2100]: E0906 01:21:11.257410 2100 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 01:21:11.257698 kubelet[2100]: I0906 01:21:11.257658 2100 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 01:21:11.258004 kubelet[2100]: I0906 01:21:11.257988 2100 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 01:21:11.258167 kubelet[2100]: I0906 01:21:11.258148 2100 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 01:21:11.258972 kubelet[2100]: I0906 01:21:11.258956 2100 server.go:317] "Adding debug handlers to kubelet server"
Sep 6 01:21:11.261944 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 6 01:21:11.262086 kubelet[2100]: I0906 01:21:11.262064 2100 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 01:21:11.263681 kubelet[2100]: I0906 01:21:11.263661 2100 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 01:21:11.265186 kubelet[2100]: I0906 01:21:11.265163 2100 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 01:21:11.265410 kubelet[2100]: E0906 01:21:11.265382 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4d72badcbe\" not found"
Sep 6 01:21:11.266320 kubelet[2100]: I0906 01:21:11.265686 2100 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 01:21:11.266320 kubelet[2100]: I0906 01:21:11.265748 2100 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 01:21:11.266320 kubelet[2100]: E0906 01:21:11.266066 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 01:21:11.266320 kubelet[2100]: E0906 01:21:11.266159 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4d72badcbe?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms"
Sep 6 01:21:11.266510 kubelet[2100]: I0906 01:21:11.266474 2100 factory.go:223] Registration of the systemd container factory successfully
Sep 6 01:21:11.266944 kubelet[2100]: I0906 01:21:11.266543 2100 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 01:21:11.267576 kubelet[2100]: I0906 01:21:11.267560 2100 factory.go:223] Registration of the containerd container factory successfully
Sep 6 01:21:11.338619 kubelet[2100]: I0906 01:21:11.338555 2100 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 6 01:21:11.340443 kubelet[2100]: I0906 01:21:11.340401 2100 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 6 01:21:11.340443 kubelet[2100]: I0906 01:21:11.340439 2100 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 6 01:21:11.340583 kubelet[2100]: I0906 01:21:11.340464 2100 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 01:21:11.340583 kubelet[2100]: I0906 01:21:11.340487 2100 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 6 01:21:11.340583 kubelet[2100]: E0906 01:21:11.340529 2100 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 01:21:11.342343 kubelet[2100]: E0906 01:21:11.342299 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 6 01:21:11.366409 kubelet[2100]: E0906 01:21:11.366377 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4d72badcbe\" not found"
Sep 6 01:21:11.394331 kubelet[2100]: I0906 01:21:11.394302 2100 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 01:21:11.394331 kubelet[2100]: I0906 01:21:11.394321 2100 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 01:21:11.394489 kubelet[2100]: I0906 01:21:11.394342 2100 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 01:21:11.398639 kubelet[2100]: I0906 01:21:11.398611 2100 policy_none.go:49] "None policy: Start"
Sep 6 01:21:11.398639 kubelet[2100]: I0906 01:21:11.398641 2100 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 6 01:21:11.398747 kubelet[2100]: I0906 01:21:11.398653 2100 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 01:21:11.408467 systemd[1]: Created slice kubepods.slice.
Sep 6 01:21:11.412767 systemd[1]: Created slice kubepods-burstable.slice.
Sep 6 01:21:11.415402 systemd[1]: Created slice kubepods-besteffort.slice.
Sep 6 01:21:11.424924 kubelet[2100]: E0906 01:21:11.424500 2100 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 6 01:21:11.424924 kubelet[2100]: I0906 01:21:11.424688 2100 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 01:21:11.424924 kubelet[2100]: I0906 01:21:11.424700 2100 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 01:21:11.426019 kubelet[2100]: I0906 01:21:11.425097 2100 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 01:21:11.426641 kubelet[2100]: E0906 01:21:11.426588 2100 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 01:21:11.426718 kubelet[2100]: E0906 01:21:11.426658 2100 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-4d72badcbe\" not found"
Sep 6 01:21:11.459451 systemd[1]: Created slice kubepods-burstable-pod096565ec8861228478fcd950e407b7e8.slice.
Sep 6 01:21:11.465970 kubelet[2100]: E0906 01:21:11.465934 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.466466 kubelet[2100]: I0906 01:21:11.466444 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.466824 kubelet[2100]: I0906 01:21:11.466807 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.466914 kubelet[2100]: I0906 01:21:11.466902 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c0593eb000502dfb9c1564f3c962831b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-4d72badcbe\" (UID: \"c0593eb000502dfb9c1564f3c962831b\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.466998 kubelet[2100]: I0906 01:21:11.466985 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/096565ec8861228478fcd950e407b7e8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" (UID: \"096565ec8861228478fcd950e407b7e8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.467078 kubelet[2100]: I0906 01:21:11.467063 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.467175 kubelet[2100]: I0906 01:21:11.467164 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/096565ec8861228478fcd950e407b7e8-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" (UID: \"096565ec8861228478fcd950e407b7e8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.467343 kubelet[2100]: I0906 01:21:11.467327 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/096565ec8861228478fcd950e407b7e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" (UID: \"096565ec8861228478fcd950e407b7e8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.467911 kubelet[2100]: I0906 01:21:11.467893 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.468010 kubelet[2100]: I0906 01:21:11.467993 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.468170 kubelet[2100]: E0906 01:21:11.467585 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4d72badcbe?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms"
Sep 6 01:21:11.470562 systemd[1]: Created slice kubepods-burstable-podff72a2baf55121cb9a229715959843ce.slice.
Sep 6 01:21:11.472601 kubelet[2100]: E0906 01:21:11.472563 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.485051 systemd[1]: Created slice kubepods-burstable-podc0593eb000502dfb9c1564f3c962831b.slice.
Sep 6 01:21:11.486831 kubelet[2100]: E0906 01:21:11.486804 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.527331 kubelet[2100]: I0906 01:21:11.527309 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.527987 kubelet[2100]: E0906 01:21:11.527965 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.730834 kubelet[2100]: I0906 01:21:11.730191 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.731364 kubelet[2100]: E0906 01:21:11.731331 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:11.767781 env[1468]: time="2025-09-06T01:21:11.767738168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-4d72badcbe,Uid:096565ec8861228478fcd950e407b7e8,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:11.774391 env[1468]: time="2025-09-06T01:21:11.774163684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-4d72badcbe,Uid:ff72a2baf55121cb9a229715959843ce,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:11.788359 env[1468]: time="2025-09-06T01:21:11.788313588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-4d72badcbe,Uid:c0593eb000502dfb9c1564f3c962831b,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:11.868943 kubelet[2100]: E0906 01:21:11.868907 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4d72badcbe?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms"
Sep 6 01:21:12.057468 kubelet[2100]: E0906 01:21:12.057420 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 01:21:12.133649 kubelet[2100]: I0906 01:21:12.133620 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:12.134228 kubelet[2100]: E0906 01:21:12.134201 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-4d72badcbe"
Sep 6 01:21:12.490036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829425464.mount: Deactivated successfully.
Sep 6 01:21:12.531012 env[1468]: time="2025-09-06T01:21:12.530957755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.534429 env[1468]: time="2025-09-06T01:21:12.534393572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.545921 env[1468]: time="2025-09-06T01:21:12.545883056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.549886 env[1468]: time="2025-09-06T01:21:12.549840430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.553368 env[1468]: time="2025-09-06T01:21:12.553337927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.556642 env[1468]: time="2025-09-06T01:21:12.556599945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.569587 env[1468]: time="2025-09-06T01:21:12.569549379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.572954 env[1468]: time="2025-09-06T01:21:12.572909997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.578202 env[1468]: time="2025-09-06T01:21:12.578168682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.586492 env[1468]: time="2025-09-06T01:21:12.586450947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.595347 env[1468]: time="2025-09-06T01:21:12.595301209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.606251 env[1468]: time="2025-09-06T01:21:12.606206297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:12.612733 kubelet[2100]: E0906 01:21:12.612686 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-4d72badcbe&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 01:21:12.658117 env[1468]: time="2025-09-06T01:21:12.649902008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:12.658117 env[1468]: time="2025-09-06T01:21:12.649941487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:12.658117 env[1468]: time="2025-09-06T01:21:12.649964407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:12.658117 env[1468]: time="2025-09-06T01:21:12.650227325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80ba3115fb523d6734378309476d139903a53db1a14e0fb3640f7278c033759 pid=2144 runtime=io.containerd.runc.v2
Sep 6 01:21:12.671003 systemd[1]: Started cri-containerd-f80ba3115fb523d6734378309476d139903a53db1a14e0fb3640f7278c033759.scope.
Sep 6 01:21:12.675693 env[1468]: time="2025-09-06T01:21:12.674499765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:12.675693 env[1468]: time="2025-09-06T01:21:12.674567564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:12.675693 env[1468]: time="2025-09-06T01:21:12.674593924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:12.675693 env[1468]: time="2025-09-06T01:21:12.674820763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f80e1595324b85a0d37a9daf1908c37d1056ca64e1f6329b55a8082d485864a3 pid=2172 runtime=io.containerd.runc.v2
Sep 6 01:21:12.676567 kubelet[2100]: E0906 01:21:12.676024 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-4d72badcbe?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s"
Sep 6 01:21:12.690904 systemd[1]: Started cri-containerd-f80e1595324b85a0d37a9daf1908c37d1056ca64e1f6329b55a8082d485864a3.scope.
Sep 6 01:21:12.695948 env[1468]: time="2025-09-06T01:21:12.695416906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:12.695948 env[1468]: time="2025-09-06T01:21:12.695462786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:12.695948 env[1468]: time="2025-09-06T01:21:12.695473226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:12.695948 env[1468]: time="2025-09-06T01:21:12.695588225Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ca9f6c7d37986ac98528a87b46c95c1b13c42b828357a1fb23d2a6bd143ee45 pid=2209 runtime=io.containerd.runc.v2
Sep 6 01:21:12.726967 env[1468]: time="2025-09-06T01:21:12.726916898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-4d72badcbe,Uid:096565ec8861228478fcd950e407b7e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f80ba3115fb523d6734378309476d139903a53db1a14e0fb3640f7278c033759\""
Sep 6 01:21:12.727821 systemd[1]: Started cri-containerd-6ca9f6c7d37986ac98528a87b46c95c1b13c42b828357a1fb23d2a6bd143ee45.scope.
Sep 6 01:21:12.735772 kubelet[2100]: E0906 01:21:12.735727 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 01:21:12.739479 env[1468]: time="2025-09-06T01:21:12.739424095Z" level=info msg="CreateContainer within sandbox \"f80ba3115fb523d6734378309476d139903a53db1a14e0fb3640f7278c033759\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 6 01:21:12.755899 kubelet[2100]: E0906 01:21:12.755795 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 6 01:21:12.761263 env[1468]: time="2025-09-06T01:21:12.761225471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-4d72badcbe,Uid:ff72a2baf55121cb9a229715959843ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"f80e1595324b85a0d37a9daf1908c37d1056ca64e1f6329b55a8082d485864a3\""
Sep 6 01:21:12.768526 env[1468]: time="2025-09-06T01:21:12.768333024Z" level=info msg="CreateContainer within sandbox \"f80e1595324b85a0d37a9daf1908c37d1056ca64e1f6329b55a8082d485864a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 6 01:21:12.780058 env[1468]: time="2025-09-06T01:21:12.780010787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-4d72badcbe,Uid:c0593eb000502dfb9c1564f3c962831b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca9f6c7d37986ac98528a87b46c95c1b13c42b828357a1fb23d2a6bd143ee45\""
Sep 6 01:21:12.783474 env[1468]: time="2025-09-06T01:21:12.783426844Z" level=info msg="CreateContainer within sandbox \"f80ba3115fb523d6734378309476d139903a53db1a14e0fb3640f7278c033759\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9badf700a591f233cfbd1e9851110b048e54e0fbb44e6425720f2b03ed05a587\""
Sep 6 01:21:12.784120 env[1468]: time="2025-09-06T01:21:12.784071040Z" level=info msg="StartContainer for \"9badf700a591f233cfbd1e9851110b048e54e0fbb44e6425720f2b03ed05a587\""
Sep 6 01:21:12.789417 env[1468]: time="2025-09-06T01:21:12.789369965Z" level=info msg="CreateContainer within sandbox \"6ca9f6c7d37986ac98528a87b46c95c1b13c42b828357a1fb23d2a6bd143ee45\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 6 01:21:12.802398 systemd[1]: Started cri-containerd-9badf700a591f233cfbd1e9851110b048e54e0fbb44e6425720f2b03ed05a587.scope.
Sep 6 01:21:12.828022 env[1468]: time="2025-09-06T01:21:12.827979109Z" level=info msg="CreateContainer within sandbox \"f80e1595324b85a0d37a9daf1908c37d1056ca64e1f6329b55a8082d485864a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7845b83d65515e38b79f7b2598a44ec137017eb3e280ccfc5c7d681a20da7d1\""
Sep 6 01:21:12.828699 env[1468]: time="2025-09-06T01:21:12.828675225Z" level=info msg="StartContainer for \"e7845b83d65515e38b79f7b2598a44ec137017eb3e280ccfc5c7d681a20da7d1\""
Sep 6 01:21:12.846999 env[1468]: time="2025-09-06T01:21:12.846941944Z" level=info msg="StartContainer for \"9badf700a591f233cfbd1e9851110b048e54e0fbb44e6425720f2b03ed05a587\" returns successfully"
Sep 6 01:21:12.853498 env[1468]: time="2025-09-06T01:21:12.853192782Z" level=info msg="CreateContainer within sandbox \"6ca9f6c7d37986ac98528a87b46c95c1b13c42b828357a1fb23d2a6bd143ee45\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"99082bc02121fa24c11a06135719882b7d7e2ee6a72e1d610500ad3a5275eadc\""
Sep 6 01:21:12.853322 systemd[1]: Started cri-containerd-e7845b83d65515e38b79f7b2598a44ec137017eb3e280ccfc5c7d681a20da7d1.scope.
Sep 6 01:21:12.854513 env[1468]: time="2025-09-06T01:21:12.854165696Z" level=info msg="StartContainer for \"99082bc02121fa24c11a06135719882b7d7e2ee6a72e1d610500ad3a5275eadc\""
Sep 6 01:21:12.880410 systemd[1]: Started cri-containerd-99082bc02121fa24c11a06135719882b7d7e2ee6a72e1d610500ad3a5275eadc.scope.
Sep 6 01:21:12.906589 env[1468]: time="2025-09-06T01:21:12.906520510Z" level=info msg="StartContainer for \"e7845b83d65515e38b79f7b2598a44ec137017eb3e280ccfc5c7d681a20da7d1\" returns successfully" Sep 6 01:21:12.937097 kubelet[2100]: I0906 01:21:12.936776 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:12.937389 kubelet[2100]: E0906 01:21:12.937360 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:12.950132 env[1468]: time="2025-09-06T01:21:12.950059822Z" level=info msg="StartContainer for \"99082bc02121fa24c11a06135719882b7d7e2ee6a72e1d610500ad3a5275eadc\" returns successfully" Sep 6 01:21:13.351091 kubelet[2100]: E0906 01:21:13.351051 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:13.352995 kubelet[2100]: E0906 01:21:13.352960 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:13.354871 kubelet[2100]: E0906 01:21:13.354850 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:14.356561 kubelet[2100]: E0906 01:21:14.356531 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:14.357094 kubelet[2100]: E0906 01:21:14.356531 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:14.539761 kubelet[2100]: I0906 01:21:14.539727 2100 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.253071 kubelet[2100]: I0906 01:21:15.253033 2100 apiserver.go:52] "Watching apiserver" Sep 6 01:21:15.348413 kubelet[2100]: E0906 01:21:15.348372 2100 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-4d72badcbe\" not found" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.366541 kubelet[2100]: I0906 01:21:15.366505 2100 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 01:21:15.438342 kubelet[2100]: I0906 01:21:15.438300 2100 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.466057 kubelet[2100]: I0906 01:21:15.466024 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.637415 kubelet[2100]: E0906 01:21:15.637301 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.637415 kubelet[2100]: I0906 01:21:15.637337 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.639186 kubelet[2100]: E0906 01:21:15.639155 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-4d72badcbe\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.639186 kubelet[2100]: I0906 01:21:15.639185 2100 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:15.640809 kubelet[2100]: E0906 01:21:15.640780 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:16.483377 kubelet[2100]: I0906 01:21:16.483349 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:16.580592 kubelet[2100]: I0906 01:21:16.580553 2100 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:16.942876 kubelet[2100]: I0906 01:21:16.942849 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:16.963545 kubelet[2100]: I0906 01:21:16.963518 2100 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:19.381679 systemd[1]: Reloading. 
Sep 6 01:21:19.461678 /usr/lib/systemd/system-generators/torcx-generator[2406]: time="2025-09-06T01:21:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:21:19.461714 /usr/lib/systemd/system-generators/torcx-generator[2406]: time="2025-09-06T01:21:19Z" level=info msg="torcx already run" Sep 6 01:21:19.502022 kubelet[2100]: I0906 01:21:19.501982 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:19.512517 kubelet[2100]: I0906 01:21:19.512387 2100 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:19.574931 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:21:19.575128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:21:19.591342 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:21:19.715475 kubelet[2100]: I0906 01:21:19.715286 2100 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:21:19.716425 systemd[1]: Stopping kubelet.service... Sep 6 01:21:19.740607 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:21:19.740940 systemd[1]: Stopped kubelet.service. Sep 6 01:21:19.741073 systemd[1]: kubelet.service: Consumed 1.298s CPU time. 
Sep 6 01:21:19.742855 systemd[1]: Starting kubelet.service... Sep 6 01:21:19.838803 systemd[1]: Started kubelet.service. Sep 6 01:21:19.883101 kubelet[2471]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:21:19.883465 kubelet[2471]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 01:21:19.883509 kubelet[2471]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:21:19.883703 kubelet[2471]: I0906 01:21:19.883674 2471 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:21:19.890545 kubelet[2471]: I0906 01:21:19.890514 2471 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 6 01:21:19.890739 kubelet[2471]: I0906 01:21:19.890728 2471 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:21:19.891019 kubelet[2471]: I0906 01:21:19.891004 2471 server.go:956] "Client rotation is on, will bootstrap in background" Sep 6 01:21:19.892303 kubelet[2471]: I0906 01:21:19.892285 2471 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 6 01:21:19.936570 kubelet[2471]: I0906 01:21:19.936164 2471 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:21:19.940680 kubelet[2471]: E0906 01:21:19.940631 2471 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method 
RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:21:19.940680 kubelet[2471]: I0906 01:21:19.940666 2471 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:21:19.945848 kubelet[2471]: I0906 01:21:19.945692 2471 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 01:21:19.946425 kubelet[2471]: I0906 01:21:19.946386 2471 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:21:19.946576 kubelet[2471]: I0906 01:21:19.946422 2471 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-4d72badcbe","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPoli
cyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 01:21:19.946681 kubelet[2471]: I0906 01:21:19.946581 2471 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:21:19.946681 kubelet[2471]: I0906 01:21:19.946592 2471 container_manager_linux.go:303] "Creating device plugin manager" Sep 6 01:21:19.946681 kubelet[2471]: I0906 01:21:19.946635 2471 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:21:19.946774 kubelet[2471]: I0906 01:21:19.946756 2471 kubelet.go:480] "Attempting to sync node with API server" Sep 6 01:21:19.946774 kubelet[2471]: I0906 01:21:19.946772 2471 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:21:19.946838 kubelet[2471]: I0906 01:21:19.946796 2471 kubelet.go:386] "Adding apiserver pod source" Sep 6 01:21:19.946838 kubelet[2471]: I0906 01:21:19.946810 2471 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:21:19.947892 kubelet[2471]: I0906 01:21:19.947855 2471 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:21:19.948610 kubelet[2471]: I0906 01:21:19.948460 2471 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 6 01:21:19.950548 kubelet[2471]: I0906 01:21:19.950523 2471 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 01:21:19.950618 kubelet[2471]: I0906 01:21:19.950560 2471 server.go:1289] "Started kubelet" Sep 6 01:21:19.953866 kubelet[2471]: I0906 01:21:19.953842 2471 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:21:19.954683 kubelet[2471]: 
I0906 01:21:19.954666 2471 server.go:317] "Adding debug handlers to kubelet server" Sep 6 01:21:19.966150 kubelet[2471]: I0906 01:21:19.965954 2471 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:21:19.967330 kubelet[2471]: I0906 01:21:19.966273 2471 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:21:19.970506 kubelet[2471]: I0906 01:21:19.970484 2471 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:21:19.985316 kubelet[2471]: I0906 01:21:19.985289 2471 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:21:19.986509 kubelet[2471]: I0906 01:21:19.986496 2471 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 01:21:19.986796 kubelet[2471]: E0906 01:21:19.986778 2471 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-4d72badcbe\" not found" Sep 6 01:21:19.990028 kubelet[2471]: I0906 01:21:19.990010 2471 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 01:21:19.990235 kubelet[2471]: I0906 01:21:19.990225 2471 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:21:19.993274 kubelet[2471]: I0906 01:21:19.993256 2471 factory.go:223] Registration of the systemd container factory successfully Sep 6 01:21:19.993469 kubelet[2471]: I0906 01:21:19.993450 2471 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:21:19.996622 kubelet[2471]: E0906 01:21:19.996603 2471 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:21:19.998695 kubelet[2471]: I0906 01:21:19.998640 2471 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 6 01:21:19.998803 kubelet[2471]: I0906 01:21:19.998788 2471 factory.go:223] Registration of the containerd container factory successfully Sep 6 01:21:20.001927 kubelet[2471]: I0906 01:21:20.001889 2471 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 6 01:21:20.001927 kubelet[2471]: I0906 01:21:20.001922 2471 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 6 01:21:20.002023 kubelet[2471]: I0906 01:21:20.001945 2471 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 01:21:20.002023 kubelet[2471]: I0906 01:21:20.001952 2471 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 01:21:20.002023 kubelet[2471]: E0906 01:21:20.001991 2471 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055684 2471 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055707 2471 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055749 2471 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055937 2471 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055950 2471 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055968 2471 policy_none.go:49] "None policy: Start" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.055999 2471 memory_manager.go:186] "Starting memorymanager" 
policy="None" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.056011 2471 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:21:20.056371 kubelet[2471]: I0906 01:21:20.056160 2471 state_mem.go:75] "Updated machine memory state" Sep 6 01:21:20.060594 kubelet[2471]: E0906 01:21:20.060554 2471 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 01:21:20.061149 kubelet[2471]: I0906 01:21:20.061094 2471 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:21:20.061226 kubelet[2471]: I0906 01:21:20.061144 2471 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:21:20.061452 kubelet[2471]: I0906 01:21:20.061427 2471 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:21:20.063829 kubelet[2471]: E0906 01:21:20.063481 2471 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 6 01:21:20.103101 kubelet[2471]: I0906 01:21:20.103071 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.103408 kubelet[2471]: I0906 01:21:20.103374 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.103488 kubelet[2471]: I0906 01:21:20.103251 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.163453 kubelet[2471]: I0906 01:21:20.163412 2471 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.224915 kubelet[2471]: I0906 01:21:20.224792 2471 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:20.224915 kubelet[2471]: E0906 01:21:20.224873 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.225065 kubelet[2471]: I0906 01:21:20.224958 2471 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:20.225065 kubelet[2471]: I0906 01:21:20.225005 2471 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:20.225065 kubelet[2471]: E0906 01:21:20.225028 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.225157 kubelet[2471]: E0906 
01:21:20.225072 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-4d72badcbe\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291315 kubelet[2471]: I0906 01:21:20.291264 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/096565ec8861228478fcd950e407b7e8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" (UID: \"096565ec8861228478fcd950e407b7e8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291315 kubelet[2471]: I0906 01:21:20.291316 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/096565ec8861228478fcd950e407b7e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" (UID: \"096565ec8861228478fcd950e407b7e8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291502 kubelet[2471]: I0906 01:21:20.291337 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291502 kubelet[2471]: I0906 01:21:20.291356 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291502 kubelet[2471]: I0906 01:21:20.291486 2471 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291590 kubelet[2471]: I0906 01:21:20.291505 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/096565ec8861228478fcd950e407b7e8-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" (UID: \"096565ec8861228478fcd950e407b7e8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291590 kubelet[2471]: I0906 01:21:20.291523 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291590 kubelet[2471]: I0906 01:21:20.291564 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff72a2baf55121cb9a229715959843ce-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" (UID: \"ff72a2baf55121cb9a229715959843ce\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.291590 kubelet[2471]: I0906 01:21:20.291580 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c0593eb000502dfb9c1564f3c962831b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-4d72badcbe\" (UID: \"c0593eb000502dfb9c1564f3c962831b\") " 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.334244 kubelet[2471]: I0906 01:21:20.334209 2471 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.334360 kubelet[2471]: I0906 01:21:20.334304 2471 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:20.454977 sudo[2506]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 01:21:20.455269 sudo[2506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 01:21:20.948035 kubelet[2471]: I0906 01:21:20.947988 2471 apiserver.go:52] "Watching apiserver" Sep 6 01:21:20.957178 sudo[2506]: pam_unix(sudo:session): session closed for user root Sep 6 01:21:20.990689 kubelet[2471]: I0906 01:21:20.990639 2471 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 01:21:21.046643 kubelet[2471]: I0906 01:21:21.046615 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:21.047167 kubelet[2471]: I0906 01:21:21.047154 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:21.047448 kubelet[2471]: I0906 01:21:21.047420 2471 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:21.085126 kubelet[2471]: I0906 01:21:21.084971 2471 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:21.085126 kubelet[2471]: E0906 01:21:21.085049 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-4d72badcbe\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" Sep 6 
01:21:21.085727 kubelet[2471]: I0906 01:21:21.085692 2471 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:21.085804 kubelet[2471]: E0906 01:21:21.085737 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-4d72badcbe\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:21.085884 kubelet[2471]: I0906 01:21:21.085868 2471 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 6 01:21:21.085967 kubelet[2471]: E0906 01:21:21.085954 2471 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-4d72badcbe\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" Sep 6 01:21:21.185357 kubelet[2471]: I0906 01:21:21.185300 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-4d72badcbe" podStartSLOduration=5.185283876 podStartE2EDuration="5.185283876s" podCreationTimestamp="2025-09-06 01:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:21.085954472 +0000 UTC m=+1.240399888" watchObservedRunningTime="2025-09-06 01:21:21.185283876 +0000 UTC m=+1.339729252" Sep 6 01:21:21.228485 kubelet[2471]: I0906 01:21:21.228352 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-4d72badcbe" podStartSLOduration=2.228319452 podStartE2EDuration="2.228319452s" podCreationTimestamp="2025-09-06 01:21:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:21.18639903 +0000 UTC 
m=+1.340844446" watchObservedRunningTime="2025-09-06 01:21:21.228319452 +0000 UTC m=+1.382764868" Sep 6 01:21:21.228635 kubelet[2471]: I0906 01:21:21.228478 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-4d72badcbe" podStartSLOduration=5.228472291 podStartE2EDuration="5.228472291s" podCreationTimestamp="2025-09-06 01:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:21.228242092 +0000 UTC m=+1.382687508" watchObservedRunningTime="2025-09-06 01:21:21.228472291 +0000 UTC m=+1.382917707" Sep 6 01:21:22.769913 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 6 01:21:22.857985 sshd[1762]: pam_unix(sshd:session): session closed for user core Sep 6 01:21:22.860745 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:56900.service: Deactivated successfully. Sep 6 01:21:22.860943 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:21:22.861431 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 01:21:22.861587 systemd[1]: session-7.scope: Consumed 7.057s CPU time. Sep 6 01:21:22.862504 systemd-logind[1457]: Removed session 7. Sep 6 01:21:23.869126 kubelet[2471]: I0906 01:21:23.869084 2471 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 01:21:23.869808 env[1468]: time="2025-09-06T01:21:23.869773198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:21:23.870173 kubelet[2471]: I0906 01:21:23.870155 2471 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 01:21:24.724749 systemd[1]: Created slice kubepods-besteffort-pod45fc45df_793f_4733_b1e8_17deb4c8ef67.slice. 
Sep 6 01:21:24.741015 systemd[1]: Created slice kubepods-burstable-pod7be46365_86ee_458b_93b8_831c3a8a078e.slice.
Sep 6 01:21:24.813287 kubelet[2471]: I0906 01:21:24.813250 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-bpf-maps\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813652 kubelet[2471]: I0906 01:21:24.813608 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-cgroup\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813717 kubelet[2471]: I0906 01:21:24.813666 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cni-path\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813717 kubelet[2471]: I0906 01:21:24.813688 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-hubble-tls\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813801 kubelet[2471]: I0906 01:21:24.813710 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45fc45df-793f-4733-b1e8-17deb4c8ef67-lib-modules\") pod \"kube-proxy-rmhp8\" (UID: \"45fc45df-793f-4733-b1e8-17deb4c8ef67\") " pod="kube-system/kube-proxy-rmhp8"
Sep 6 01:21:24.813801 kubelet[2471]: I0906 01:21:24.813740 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-run\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813801 kubelet[2471]: I0906 01:21:24.813757 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qqh\" (UniqueName: \"kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-kube-api-access-64qqh\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813801 kubelet[2471]: I0906 01:21:24.813779 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wpqk\" (UniqueName: \"kubernetes.io/projected/45fc45df-793f-4733-b1e8-17deb4c8ef67-kube-api-access-2wpqk\") pod \"kube-proxy-rmhp8\" (UID: \"45fc45df-793f-4733-b1e8-17deb4c8ef67\") " pod="kube-system/kube-proxy-rmhp8"
Sep 6 01:21:24.813892 kubelet[2471]: I0906 01:21:24.813810 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-hostproc\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813892 kubelet[2471]: I0906 01:21:24.813829 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-lib-modules\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813892 kubelet[2471]: I0906 01:21:24.813847 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-xtables-lock\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813892 kubelet[2471]: I0906 01:21:24.813866 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-config-path\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813979 kubelet[2471]: I0906 01:21:24.813896 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-net\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.813979 kubelet[2471]: I0906 01:21:24.813916 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45fc45df-793f-4733-b1e8-17deb4c8ef67-kube-proxy\") pod \"kube-proxy-rmhp8\" (UID: \"45fc45df-793f-4733-b1e8-17deb4c8ef67\") " pod="kube-system/kube-proxy-rmhp8"
Sep 6 01:21:24.813979 kubelet[2471]: I0906 01:21:24.813934 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45fc45df-793f-4733-b1e8-17deb4c8ef67-xtables-lock\") pod \"kube-proxy-rmhp8\" (UID: \"45fc45df-793f-4733-b1e8-17deb4c8ef67\") " pod="kube-system/kube-proxy-rmhp8"
Sep 6 01:21:24.813979 kubelet[2471]: I0906 01:21:24.813965 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-etc-cni-netd\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.814066 kubelet[2471]: I0906 01:21:24.813982 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7be46365-86ee-458b-93b8-831c3a8a078e-clustermesh-secrets\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.814066 kubelet[2471]: I0906 01:21:24.814000 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-kernel\") pod \"cilium-6476x\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " pod="kube-system/cilium-6476x"
Sep 6 01:21:24.919570 kubelet[2471]: I0906 01:21:24.919529 2471 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 01:21:25.039124 env[1468]: time="2025-09-06T01:21:25.038620602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmhp8,Uid:45fc45df-793f-4733-b1e8-17deb4c8ef67,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:25.044400 env[1468]: time="2025-09-06T01:21:25.044175936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6476x,Uid:7be46365-86ee-458b-93b8-831c3a8a078e,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:25.090420 env[1468]: time="2025-09-06T01:21:25.090354760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:25.090609 env[1468]: time="2025-09-06T01:21:25.090586998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:25.090721 env[1468]: time="2025-09-06T01:21:25.090699158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:25.090956 env[1468]: time="2025-09-06T01:21:25.090926357Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40af036e04be59a411c011cd04f57b5ab13dc2e4b56dbc9bab709d4bf331d4fd pid=2555 runtime=io.containerd.runc.v2
Sep 6 01:21:25.109219 systemd[1]: Created slice kubepods-besteffort-pod08b91e49_f176_40ac_bde4_d815d9bd2036.slice.
Sep 6 01:21:25.116856 kubelet[2471]: I0906 01:21:25.116815 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8jf6\" (UniqueName: \"kubernetes.io/projected/08b91e49-f176-40ac-bde4-d815d9bd2036-kube-api-access-f8jf6\") pod \"cilium-operator-6c4d7847fc-hmqfp\" (UID: \"08b91e49-f176-40ac-bde4-d815d9bd2036\") " pod="kube-system/cilium-operator-6c4d7847fc-hmqfp"
Sep 6 01:21:25.116978 kubelet[2471]: I0906 01:21:25.116858 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08b91e49-f176-40ac-bde4-d815d9bd2036-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hmqfp\" (UID: \"08b91e49-f176-40ac-bde4-d815d9bd2036\") " pod="kube-system/cilium-operator-6c4d7847fc-hmqfp"
Sep 6 01:21:25.120574 systemd[1]: Started cri-containerd-40af036e04be59a411c011cd04f57b5ab13dc2e4b56dbc9bab709d4bf331d4fd.scope.
Sep 6 01:21:25.125582 env[1468]: time="2025-09-06T01:21:25.125520755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:25.125764 env[1468]: time="2025-09-06T01:21:25.125742274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:25.125875 env[1468]: time="2025-09-06T01:21:25.125853753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:25.126121 env[1468]: time="2025-09-06T01:21:25.126074832Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed pid=2580 runtime=io.containerd.runc.v2
Sep 6 01:21:25.140009 systemd[1]: Started cri-containerd-fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed.scope.
Sep 6 01:21:25.190690 env[1468]: time="2025-09-06T01:21:25.190650969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6476x,Uid:7be46365-86ee-458b-93b8-831c3a8a078e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\""
Sep 6 01:21:25.192611 env[1468]: time="2025-09-06T01:21:25.192578600Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 01:21:25.194423 env[1468]: time="2025-09-06T01:21:25.194381672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmhp8,Uid:45fc45df-793f-4733-b1e8-17deb4c8ef67,Namespace:kube-system,Attempt:0,} returns sandbox id \"40af036e04be59a411c011cd04f57b5ab13dc2e4b56dbc9bab709d4bf331d4fd\""
Sep 6 01:21:25.202235 env[1468]: time="2025-09-06T01:21:25.202197635Z" level=info msg="CreateContainer within sandbox \"40af036e04be59a411c011cd04f57b5ab13dc2e4b56dbc9bab709d4bf331d4fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 01:21:25.259802 env[1468]: time="2025-09-06T01:21:25.259757285Z" level=info msg="CreateContainer within sandbox \"40af036e04be59a411c011cd04f57b5ab13dc2e4b56dbc9bab709d4bf331d4fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"136e601122215b2f7e03cd0b8233f113e561e370e3a97dcdb4ecc7192c2caf23\""
Sep 6 01:21:25.261801 env[1468]: time="2025-09-06T01:21:25.260793080Z" level=info msg="StartContainer for \"136e601122215b2f7e03cd0b8233f113e561e370e3a97dcdb4ecc7192c2caf23\""
Sep 6 01:21:25.277130 systemd[1]: Started cri-containerd-136e601122215b2f7e03cd0b8233f113e561e370e3a97dcdb4ecc7192c2caf23.scope.
Sep 6 01:21:25.314782 env[1468]: time="2025-09-06T01:21:25.314668468Z" level=info msg="StartContainer for \"136e601122215b2f7e03cd0b8233f113e561e370e3a97dcdb4ecc7192c2caf23\" returns successfully"
Sep 6 01:21:25.413424 env[1468]: time="2025-09-06T01:21:25.413378965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hmqfp,Uid:08b91e49-f176-40ac-bde4-d815d9bd2036,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:25.469555 env[1468]: time="2025-09-06T01:21:25.469375142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:25.469555 env[1468]: time="2025-09-06T01:21:25.469522781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:25.470165 env[1468]: time="2025-09-06T01:21:25.469533581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:25.470165 env[1468]: time="2025-09-06T01:21:25.469681901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee pid=2711 runtime=io.containerd.runc.v2
Sep 6 01:21:25.481707 systemd[1]: Started cri-containerd-70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee.scope.
Sep 6 01:21:25.513520 env[1468]: time="2025-09-06T01:21:25.513393736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hmqfp,Uid:08b91e49-f176-40ac-bde4-d815d9bd2036,Namespace:kube-system,Attempt:0,} returns sandbox id \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\""
Sep 6 01:21:26.803241 kubelet[2471]: I0906 01:21:26.802865 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rmhp8" podStartSLOduration=2.8028490230000003 podStartE2EDuration="2.802849023s" podCreationTimestamp="2025-09-06 01:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:26.074015355 +0000 UTC m=+6.228460771" watchObservedRunningTime="2025-09-06 01:21:26.802849023 +0000 UTC m=+6.957294439"
Sep 6 01:21:30.100269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318829517.mount: Deactivated successfully.
Sep 6 01:21:32.308750 env[1468]: time="2025-09-06T01:21:32.308705528Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:32.319045 env[1468]: time="2025-09-06T01:21:32.319004888Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:32.327783 env[1468]: time="2025-09-06T01:21:32.327726534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:32.328478 env[1468]: time="2025-09-06T01:21:32.328435371Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 6 01:21:32.329944 env[1468]: time="2025-09-06T01:21:32.329906685Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 01:21:32.340422 env[1468]: time="2025-09-06T01:21:32.340385124Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 01:21:32.365404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782152663.mount: Deactivated successfully.
Sep 6 01:21:32.371821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623952704.mount: Deactivated successfully.
Sep 6 01:21:32.385978 env[1468]: time="2025-09-06T01:21:32.385922584Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\""
Sep 6 01:21:32.388148 env[1468]: time="2025-09-06T01:21:32.386805101Z" level=info msg="StartContainer for \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\""
Sep 6 01:21:32.405896 systemd[1]: Started cri-containerd-99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293.scope.
Sep 6 01:21:32.434115 env[1468]: time="2025-09-06T01:21:32.434050835Z" level=info msg="StartContainer for \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\" returns successfully"
Sep 6 01:21:32.442842 systemd[1]: cri-containerd-99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293.scope: Deactivated successfully.
Sep 6 01:21:33.362059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293-rootfs.mount: Deactivated successfully.
Sep 6 01:21:33.958790 env[1468]: time="2025-09-06T01:21:33.958730961Z" level=info msg="shim disconnected" id=99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293
Sep 6 01:21:33.958790 env[1468]: time="2025-09-06T01:21:33.958777121Z" level=warning msg="cleaning up after shim disconnected" id=99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293 namespace=k8s.io
Sep 6 01:21:33.958790 env[1468]: time="2025-09-06T01:21:33.958786361Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:33.966676 env[1468]: time="2025-09-06T01:21:33.966623930Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2886 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:34.091352 env[1468]: time="2025-09-06T01:21:34.091300979Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 01:21:34.131633 env[1468]: time="2025-09-06T01:21:34.131593068Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\""
Sep 6 01:21:34.132411 env[1468]: time="2025-09-06T01:21:34.132386265Z" level=info msg="StartContainer for \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\""
Sep 6 01:21:34.150203 systemd[1]: Started cri-containerd-7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307.scope.
Sep 6 01:21:34.179167 env[1468]: time="2025-09-06T01:21:34.177500376Z" level=info msg="StartContainer for \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\" returns successfully"
Sep 6 01:21:34.192925 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 01:21:34.193131 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 01:21:34.193895 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 01:21:34.195517 systemd[1]: Starting systemd-sysctl.service...
Sep 6 01:21:34.201663 systemd[1]: cri-containerd-7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307.scope: Deactivated successfully.
Sep 6 01:21:34.203714 systemd[1]: Finished systemd-sysctl.service.
Sep 6 01:21:34.231674 env[1468]: time="2025-09-06T01:21:34.230948495Z" level=info msg="shim disconnected" id=7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307
Sep 6 01:21:34.231674 env[1468]: time="2025-09-06T01:21:34.230995855Z" level=warning msg="cleaning up after shim disconnected" id=7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307 namespace=k8s.io
Sep 6 01:21:34.231674 env[1468]: time="2025-09-06T01:21:34.231006095Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:34.238454 env[1468]: time="2025-09-06T01:21:34.238388148Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2950 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:34.362314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307-rootfs.mount: Deactivated successfully.
Sep 6 01:21:35.096758 env[1468]: time="2025-09-06T01:21:35.096710175Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 01:21:35.132554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428589667.mount: Deactivated successfully.
Sep 6 01:21:35.179709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554275089.mount: Deactivated successfully.
Sep 6 01:21:35.204420 env[1468]: time="2025-09-06T01:21:35.204372141Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\""
Sep 6 01:21:35.206758 env[1468]: time="2025-09-06T01:21:35.206726852Z" level=info msg="StartContainer for \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\""
Sep 6 01:21:35.228854 systemd[1]: Started cri-containerd-9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f.scope.
Sep 6 01:21:35.260330 systemd[1]: cri-containerd-9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f.scope: Deactivated successfully.
Sep 6 01:21:35.274850 env[1468]: time="2025-09-06T01:21:35.274806443Z" level=info msg="StartContainer for \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\" returns successfully"
Sep 6 01:21:35.307989 env[1468]: time="2025-09-06T01:21:35.307944481Z" level=info msg="shim disconnected" id=9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f
Sep 6 01:21:35.308260 env[1468]: time="2025-09-06T01:21:35.308240000Z" level=warning msg="cleaning up after shim disconnected" id=9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f namespace=k8s.io
Sep 6 01:21:35.308339 env[1468]: time="2025-09-06T01:21:35.308325800Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:35.318900 env[1468]: time="2025-09-06T01:21:35.318861641Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3007 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:35.797606 env[1468]: time="2025-09-06T01:21:35.797550928Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:35.811153 env[1468]: time="2025-09-06T01:21:35.811096678Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:35.816866 env[1468]: time="2025-09-06T01:21:35.816837657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:35.819194 env[1468]: time="2025-09-06T01:21:35.818097692Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 6 01:21:35.825797 env[1468]: time="2025-09-06T01:21:35.825767424Z" level=info msg="CreateContainer within sandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 01:21:35.859083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277274473.mount: Deactivated successfully.
Sep 6 01:21:35.874020 env[1468]: time="2025-09-06T01:21:35.873975448Z" level=info msg="CreateContainer within sandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\""
Sep 6 01:21:35.876302 env[1468]: time="2025-09-06T01:21:35.876232799Z" level=info msg="StartContainer for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\""
Sep 6 01:21:35.894996 systemd[1]: Started cri-containerd-3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106.scope.
Sep 6 01:21:35.926026 env[1468]: time="2025-09-06T01:21:35.925980777Z" level=info msg="StartContainer for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" returns successfully"
Sep 6 01:21:36.100966 env[1468]: time="2025-09-06T01:21:36.100597146Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 01:21:36.146938 env[1468]: time="2025-09-06T01:21:36.146888540Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\""
Sep 6 01:21:36.147839 env[1468]: time="2025-09-06T01:21:36.147811497Z" level=info msg="StartContainer for \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\""
Sep 6 01:21:36.162525 systemd[1]: Started cri-containerd-174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2.scope.
Sep 6 01:21:36.194532 systemd[1]: cri-containerd-174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2.scope: Deactivated successfully.
Sep 6 01:21:36.204506 env[1468]: time="2025-09-06T01:21:36.204368054Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be46365_86ee_458b_93b8_831c3a8a078e.slice/cri-containerd-174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2.scope/memory.events\": no such file or directory"
Sep 6 01:21:36.207205 env[1468]: time="2025-09-06T01:21:36.207169844Z" level=info msg="StartContainer for \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\" returns successfully"
Sep 6 01:21:36.497563 kubelet[2471]: I0906 01:21:36.351051 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hmqfp" podStartSLOduration=1.046086213 podStartE2EDuration="11.35100321s" podCreationTimestamp="2025-09-06 01:21:25 +0000 UTC" firstStartedPulling="2025-09-06 01:21:25.514795809 +0000 UTC m=+5.669241225" lastFinishedPulling="2025-09-06 01:21:35.819712806 +0000 UTC m=+15.974158222" observedRunningTime="2025-09-06 01:21:36.178441427 +0000 UTC m=+16.332886803" watchObservedRunningTime="2025-09-06 01:21:36.35100321 +0000 UTC m=+16.505448626"
Sep 6 01:21:36.547742 env[1468]: time="2025-09-06T01:21:36.547696466Z" level=info msg="shim disconnected" id=174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2
Sep 6 01:21:36.547993 env[1468]: time="2025-09-06T01:21:36.547974105Z" level=warning msg="cleaning up after shim disconnected" id=174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2 namespace=k8s.io
Sep 6 01:21:36.548081 env[1468]: time="2025-09-06T01:21:36.548067545Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:36.557588 env[1468]: time="2025-09-06T01:21:36.557541471Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3099 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:37.104880 env[1468]: time="2025-09-06T01:21:37.104832121Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 01:21:37.145893 env[1468]: time="2025-09-06T01:21:37.145841458Z" level=info msg="CreateContainer within sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\""
Sep 6 01:21:37.146613 env[1468]: time="2025-09-06T01:21:37.146574015Z" level=info msg="StartContainer for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\""
Sep 6 01:21:37.173446 systemd[1]: Started cri-containerd-8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a.scope.
Sep 6 01:21:37.221336 env[1468]: time="2025-09-06T01:21:37.221260754Z" level=info msg="StartContainer for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" returns successfully"
Sep 6 01:21:37.307253 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 6 01:21:37.334385 kubelet[2471]: I0906 01:21:37.333306 2471 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 6 01:21:37.444346 systemd[1]: Created slice kubepods-burstable-pod0070d575_4634_49a3_a251_8cb5505cf132.slice.
Sep 6 01:21:37.453074 systemd[1]: Created slice kubepods-burstable-pode0ee9249_4d00_49f1_8933_83b040a3b51b.slice.
Sep 6 01:21:37.499445 kubelet[2471]: I0906 01:21:37.499407 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0ee9249-4d00-49f1-8933-83b040a3b51b-config-volume\") pod \"coredns-674b8bbfcf-vqcx4\" (UID: \"e0ee9249-4d00-49f1-8933-83b040a3b51b\") " pod="kube-system/coredns-674b8bbfcf-vqcx4"
Sep 6 01:21:37.499763 kubelet[2471]: I0906 01:21:37.499469 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcnxs\" (UniqueName: \"kubernetes.io/projected/e0ee9249-4d00-49f1-8933-83b040a3b51b-kube-api-access-vcnxs\") pod \"coredns-674b8bbfcf-vqcx4\" (UID: \"e0ee9249-4d00-49f1-8933-83b040a3b51b\") " pod="kube-system/coredns-674b8bbfcf-vqcx4"
Sep 6 01:21:37.499763 kubelet[2471]: I0906 01:21:37.499494 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtvfg\" (UniqueName: \"kubernetes.io/projected/0070d575-4634-49a3-a251-8cb5505cf132-kube-api-access-qtvfg\") pod \"coredns-674b8bbfcf-5bz7p\" (UID: \"0070d575-4634-49a3-a251-8cb5505cf132\") " pod="kube-system/coredns-674b8bbfcf-5bz7p"
Sep 6 01:21:37.499763 kubelet[2471]: I0906 01:21:37.499546 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0070d575-4634-49a3-a251-8cb5505cf132-config-volume\") pod \"coredns-674b8bbfcf-5bz7p\" (UID: \"0070d575-4634-49a3-a251-8cb5505cf132\") " pod="kube-system/coredns-674b8bbfcf-5bz7p"
Sep 6 01:21:37.715134 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 6 01:21:37.748998 env[1468]: time="2025-09-06T01:21:37.748951310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5bz7p,Uid:0070d575-4634-49a3-a251-8cb5505cf132,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:37.756991 env[1468]: time="2025-09-06T01:21:37.756753883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vqcx4,Uid:e0ee9249-4d00-49f1-8933-83b040a3b51b,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:38.123368 kubelet[2471]: I0906 01:21:38.123242 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6476x" podStartSLOduration=6.985725928 podStartE2EDuration="14.123224212s" podCreationTimestamp="2025-09-06 01:21:24 +0000 UTC" firstStartedPulling="2025-09-06 01:21:25.192257562 +0000 UTC m=+5.346702978" lastFinishedPulling="2025-09-06 01:21:32.329755846 +0000 UTC m=+12.484201262" observedRunningTime="2025-09-06 01:21:38.122325015 +0000 UTC m=+18.276770471" watchObservedRunningTime="2025-09-06 01:21:38.123224212 +0000 UTC m=+18.277669628"
Sep 6 01:21:39.433211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 6 01:21:39.433582 systemd-networkd[1620]: cilium_host: Link UP
Sep 6 01:21:39.433708 systemd-networkd[1620]: cilium_net: Link UP
Sep 6 01:21:39.433711 systemd-networkd[1620]: cilium_net: Gained carrier
Sep 6 01:21:39.433836 systemd-networkd[1620]: cilium_host: Gained carrier
Sep 6 01:21:39.434031 systemd-networkd[1620]: cilium_host: Gained IPv6LL
Sep 6 01:21:39.555589 systemd-networkd[1620]: cilium_vxlan: Link UP
Sep 6 01:21:39.555595 systemd-networkd[1620]: cilium_vxlan: Gained carrier
Sep 6 01:21:39.795140 kernel: NET: Registered PF_ALG protocol family
Sep 6 01:21:40.387280 systemd-networkd[1620]: cilium_net: Gained IPv6LL
Sep 6 01:21:40.548713 systemd-networkd[1620]: lxc_health: Link UP
Sep 6 01:21:40.562407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 01:21:40.562204 systemd-networkd[1620]: lxc_health: Gained carrier
Sep 6 01:21:40.831072 systemd-networkd[1620]: lxcd79eb2e907ff: Link UP
Sep 6 01:21:40.839157 kernel: eth0: renamed from tmp08a37
Sep 6 01:21:40.853149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd79eb2e907ff: link becomes ready
Sep 6 01:21:40.852883 systemd-networkd[1620]: lxcd79eb2e907ff: Gained carrier
Sep 6 01:21:40.864347 systemd-networkd[1620]: lxc9c63a33eebe9: Link UP
Sep 6 01:21:40.878154 kernel: eth0: renamed from tmp59843
Sep 6 01:21:40.889167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9c63a33eebe9: link becomes ready
Sep 6 01:21:40.889138 systemd-networkd[1620]: lxc9c63a33eebe9: Gained carrier
Sep 6 01:21:41.411294 systemd-networkd[1620]: cilium_vxlan: Gained IPv6LL
Sep 6 01:21:42.052302 systemd-networkd[1620]: lxc_health: Gained IPv6LL
Sep 6 01:21:42.115310 systemd-networkd[1620]: lxc9c63a33eebe9: Gained IPv6LL
Sep 6 01:21:42.243273 systemd-networkd[1620]: lxcd79eb2e907ff: Gained IPv6LL
Sep 6 01:21:44.500031 env[1468]: time="2025-09-06T01:21:44.499942618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:44.500031 env[1468]: time="2025-09-06T01:21:44.499983177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:44.500031 env[1468]: time="2025-09-06T01:21:44.499996857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:44.500606 env[1468]: time="2025-09-06T01:21:44.500549536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08a378603710d4a823b7a5d8805fc765957e1f772f34a32425e031f449fe031c pid=3652 runtime=io.containerd.runc.v2
Sep 6 01:21:44.527524 systemd[1]: run-containerd-runc-k8s.io-08a378603710d4a823b7a5d8805fc765957e1f772f34a32425e031f449fe031c-runc.uOHqzn.mount: Deactivated successfully.
Sep 6 01:21:44.532124 systemd[1]: Started cri-containerd-08a378603710d4a823b7a5d8805fc765957e1f772f34a32425e031f449fe031c.scope. Sep 6 01:21:44.542517 env[1468]: time="2025-09-06T01:21:44.542213652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:21:44.542517 env[1468]: time="2025-09-06T01:21:44.542334171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:21:44.542517 env[1468]: time="2025-09-06T01:21:44.542344851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:21:44.542517 env[1468]: time="2025-09-06T01:21:44.542456291Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59843c65ea73d77e7c705e8b24d599550a2d1eda999aa4331d62d03ece0014bd pid=3681 runtime=io.containerd.runc.v2 Sep 6 01:21:44.561473 systemd[1]: Started cri-containerd-59843c65ea73d77e7c705e8b24d599550a2d1eda999aa4331d62d03ece0014bd.scope. 
Sep 6 01:21:44.601134 env[1468]: time="2025-09-06T01:21:44.601075236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5bz7p,Uid:0070d575-4634-49a3-a251-8cb5505cf132,Namespace:kube-system,Attempt:0,} returns sandbox id \"08a378603710d4a823b7a5d8805fc765957e1f772f34a32425e031f449fe031c\"" Sep 6 01:21:44.613651 env[1468]: time="2025-09-06T01:21:44.613602759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vqcx4,Uid:e0ee9249-4d00-49f1-8933-83b040a3b51b,Namespace:kube-system,Attempt:0,} returns sandbox id \"59843c65ea73d77e7c705e8b24d599550a2d1eda999aa4331d62d03ece0014bd\"" Sep 6 01:21:44.614513 env[1468]: time="2025-09-06T01:21:44.614480236Z" level=info msg="CreateContainer within sandbox \"08a378603710d4a823b7a5d8805fc765957e1f772f34a32425e031f449fe031c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:21:44.621351 env[1468]: time="2025-09-06T01:21:44.621312656Z" level=info msg="CreateContainer within sandbox \"59843c65ea73d77e7c705e8b24d599550a2d1eda999aa4331d62d03ece0014bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:21:44.669493 env[1468]: time="2025-09-06T01:21:44.669446632Z" level=info msg="CreateContainer within sandbox \"08a378603710d4a823b7a5d8805fc765957e1f772f34a32425e031f449fe031c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"212b8db29cd3265ced0bddc23f990070de8d10b0606a68e33a3c5e1430a9b030\"" Sep 6 01:21:44.670383 env[1468]: time="2025-09-06T01:21:44.670357030Z" level=info msg="StartContainer for \"212b8db29cd3265ced0bddc23f990070de8d10b0606a68e33a3c5e1430a9b030\"" Sep 6 01:21:44.677754 env[1468]: time="2025-09-06T01:21:44.677707448Z" level=info msg="CreateContainer within sandbox \"59843c65ea73d77e7c705e8b24d599550a2d1eda999aa4331d62d03ece0014bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f243bb8968c9c44c524702ed99c896625b20781d3c9f02648826d6b5ddebe664\"" Sep 6 01:21:44.679353 env[1468]: 
time="2025-09-06T01:21:44.679313443Z" level=info msg="StartContainer for \"f243bb8968c9c44c524702ed99c896625b20781d3c9f02648826d6b5ddebe664\"" Sep 6 01:21:44.688410 systemd[1]: Started cri-containerd-212b8db29cd3265ced0bddc23f990070de8d10b0606a68e33a3c5e1430a9b030.scope. Sep 6 01:21:44.704922 systemd[1]: Started cri-containerd-f243bb8968c9c44c524702ed99c896625b20781d3c9f02648826d6b5ddebe664.scope. Sep 6 01:21:44.735706 env[1468]: time="2025-09-06T01:21:44.735651955Z" level=info msg="StartContainer for \"212b8db29cd3265ced0bddc23f990070de8d10b0606a68e33a3c5e1430a9b030\" returns successfully" Sep 6 01:21:44.745549 env[1468]: time="2025-09-06T01:21:44.745503125Z" level=info msg="StartContainer for \"f243bb8968c9c44c524702ed99c896625b20781d3c9f02648826d6b5ddebe664\" returns successfully" Sep 6 01:21:45.133690 kubelet[2471]: I0906 01:21:45.133631 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5bz7p" podStartSLOduration=20.133614457 podStartE2EDuration="20.133614457s" podCreationTimestamp="2025-09-06 01:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:45.130223947 +0000 UTC m=+25.284669323" watchObservedRunningTime="2025-09-06 01:21:45.133614457 +0000 UTC m=+25.288059833" Sep 6 01:21:45.270527 kubelet[2471]: I0906 01:21:45.270463 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vqcx4" podStartSLOduration=20.270446378 podStartE2EDuration="20.270446378s" podCreationTimestamp="2025-09-06 01:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:45.2696619 +0000 UTC m=+25.424107276" watchObservedRunningTime="2025-09-06 01:21:45.270446378 +0000 UTC m=+25.424891794" Sep 6 01:21:51.520248 kubelet[2471]: I0906 01:21:51.520208 2471 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:23:24.097813 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:53008.service. Sep 6 01:23:24.546955 sshd[3830]: Accepted publickey for core from 10.200.16.10 port 53008 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:24.548365 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:24.552609 systemd-logind[1457]: New session 8 of user core. Sep 6 01:23:24.553095 systemd[1]: Started session-8.scope. Sep 6 01:23:24.980322 sshd[3830]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:24.983520 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:53008.service: Deactivated successfully. Sep 6 01:23:24.983692 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Sep 6 01:23:24.984315 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 01:23:24.985304 systemd-logind[1457]: Removed session 8. Sep 6 01:23:30.056318 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:46962.service. Sep 6 01:23:30.505760 sshd[3845]: Accepted publickey for core from 10.200.16.10 port 46962 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:30.507022 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:30.511486 systemd[1]: Started session-9.scope. Sep 6 01:23:30.511831 systemd-logind[1457]: New session 9 of user core. Sep 6 01:23:30.913703 sshd[3845]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:30.917020 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:46962.service: Deactivated successfully. Sep 6 01:23:30.917737 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 01:23:30.918161 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Sep 6 01:23:30.918845 systemd-logind[1457]: Removed session 9. 
Sep 6 01:23:35.982921 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:46964.service. Sep 6 01:23:36.392942 sshd[3858]: Accepted publickey for core from 10.200.16.10 port 46964 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:36.394362 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:36.399372 systemd[1]: Started session-10.scope. Sep 6 01:23:36.399543 systemd-logind[1457]: New session 10 of user core. Sep 6 01:23:36.773079 sshd[3858]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:36.776081 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:46964.service: Deactivated successfully. Sep 6 01:23:36.776304 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Sep 6 01:23:36.776811 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 01:23:36.777515 systemd-logind[1457]: Removed session 10. Sep 6 01:23:41.842462 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:47226.service. Sep 6 01:23:42.252536 sshd[3870]: Accepted publickey for core from 10.200.16.10 port 47226 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:42.254269 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:42.258052 systemd-logind[1457]: New session 11 of user core. Sep 6 01:23:42.258624 systemd[1]: Started session-11.scope. Sep 6 01:23:42.629946 sshd[3870]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:42.633100 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:47226.service: Deactivated successfully. Sep 6 01:23:42.633870 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 01:23:42.634815 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Sep 6 01:23:42.635715 systemd-logind[1457]: Removed session 11. Sep 6 01:23:42.698922 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:47234.service. 
Sep 6 01:23:43.108938 sshd[3882]: Accepted publickey for core from 10.200.16.10 port 47234 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:43.110595 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:43.114435 systemd-logind[1457]: New session 12 of user core. Sep 6 01:23:43.114921 systemd[1]: Started session-12.scope. Sep 6 01:23:43.529214 sshd[3882]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:43.532187 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit. Sep 6 01:23:43.532382 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:47234.service: Deactivated successfully. Sep 6 01:23:43.533086 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 01:23:43.533823 systemd-logind[1457]: Removed session 12. Sep 6 01:23:43.597864 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:47250.service. Sep 6 01:23:44.008026 sshd[3893]: Accepted publickey for core from 10.200.16.10 port 47250 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:44.009410 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:44.013772 systemd-logind[1457]: New session 13 of user core. Sep 6 01:23:44.014306 systemd[1]: Started session-13.scope. Sep 6 01:23:44.396510 sshd[3893]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:44.399701 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit. Sep 6 01:23:44.399710 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 01:23:44.400309 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:47250.service: Deactivated successfully. Sep 6 01:23:44.404332 systemd-logind[1457]: Removed session 13. Sep 6 01:23:49.479184 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:47262.service. 
Sep 6 01:23:49.930323 sshd[3905]: Accepted publickey for core from 10.200.16.10 port 47262 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:49.932019 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:49.936846 systemd[1]: Started session-14.scope. Sep 6 01:23:49.937359 systemd-logind[1457]: New session 14 of user core. Sep 6 01:23:50.333654 sshd[3905]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:50.336704 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:47262.service: Deactivated successfully. Sep 6 01:23:50.337510 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 01:23:50.338808 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit. Sep 6 01:23:50.340083 systemd-logind[1457]: Removed session 14. Sep 6 01:23:55.414021 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:35248.service. Sep 6 01:23:55.864421 sshd[3917]: Accepted publickey for core from 10.200.16.10 port 35248 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:55.865783 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:55.870286 systemd[1]: Started session-15.scope. Sep 6 01:23:55.870588 systemd-logind[1457]: New session 15 of user core. Sep 6 01:23:56.271547 sshd[3917]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:56.274070 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 01:23:56.274797 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit. Sep 6 01:23:56.274925 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:35248.service: Deactivated successfully. Sep 6 01:23:56.276038 systemd-logind[1457]: Removed session 15. Sep 6 01:23:56.334398 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:35258.service. 
Sep 6 01:23:56.752036 sshd[3931]: Accepted publickey for core from 10.200.16.10 port 35258 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:56.753733 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:56.758086 systemd[1]: Started session-16.scope. Sep 6 01:23:56.759208 systemd-logind[1457]: New session 16 of user core. Sep 6 01:23:57.159825 sshd[3931]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:57.163531 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:35258.service: Deactivated successfully. Sep 6 01:23:57.163742 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit. Sep 6 01:23:57.164278 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 01:23:57.165204 systemd-logind[1457]: Removed session 16. Sep 6 01:23:57.228854 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:35262.service. Sep 6 01:23:57.640345 sshd[3940]: Accepted publickey for core from 10.200.16.10 port 35262 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:57.642050 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:57.646419 systemd[1]: Started session-17.scope. Sep 6 01:23:57.646880 systemd-logind[1457]: New session 17 of user core. Sep 6 01:23:58.513314 sshd[3940]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:58.516661 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:35262.service: Deactivated successfully. Sep 6 01:23:58.517451 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 01:23:58.518028 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit. Sep 6 01:23:58.518832 systemd-logind[1457]: Removed session 17. Sep 6 01:23:58.583652 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:35270.service. 
Sep 6 01:23:58.993378 sshd[3958]: Accepted publickey for core from 10.200.16.10 port 35270 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:58.994939 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:58.998776 systemd-logind[1457]: New session 18 of user core. Sep 6 01:23:58.999305 systemd[1]: Started session-18.scope. Sep 6 01:23:59.501454 sshd[3958]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:59.504237 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:35270.service: Deactivated successfully. Sep 6 01:23:59.505009 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 01:23:59.505646 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit. Sep 6 01:23:59.506601 systemd-logind[1457]: Removed session 18. Sep 6 01:23:59.569434 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:35274.service. Sep 6 01:23:59.979448 sshd[3968]: Accepted publickey for core from 10.200.16.10 port 35274 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:59.981098 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:59.985567 systemd[1]: Started session-19.scope. Sep 6 01:23:59.986284 systemd-logind[1457]: New session 19 of user core. Sep 6 01:24:00.361341 sshd[3968]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:00.367717 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:35274.service: Deactivated successfully. Sep 6 01:24:00.368501 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 01:24:00.369824 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Sep 6 01:24:00.372510 systemd-logind[1457]: Removed session 19. Sep 6 01:24:05.435570 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:52242.service. 
Sep 6 01:24:05.885325 sshd[3982]: Accepted publickey for core from 10.200.16.10 port 52242 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:05.886979 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:05.890807 systemd-logind[1457]: New session 20 of user core. Sep 6 01:24:05.891405 systemd[1]: Started session-20.scope. Sep 6 01:24:06.292217 sshd[3982]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:06.294676 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit. Sep 6 01:24:06.294939 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:52242.service: Deactivated successfully. Sep 6 01:24:06.295653 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 01:24:06.296499 systemd-logind[1457]: Removed session 20. Sep 6 01:24:11.381073 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:35614.service. Sep 6 01:24:11.871142 sshd[3994]: Accepted publickey for core from 10.200.16.10 port 35614 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:11.872047 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:11.876571 systemd[1]: Started session-21.scope. Sep 6 01:24:11.877757 systemd-logind[1457]: New session 21 of user core. Sep 6 01:24:12.294243 sshd[3994]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:12.297045 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:35614.service: Deactivated successfully. Sep 6 01:24:12.297763 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 01:24:12.298729 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit. Sep 6 01:24:12.299529 systemd-logind[1457]: Removed session 21. Sep 6 01:24:12.356522 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:35624.service. 
Sep 6 01:24:12.770638 sshd[4006]: Accepted publickey for core from 10.200.16.10 port 35624 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:12.771941 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:12.775757 systemd-logind[1457]: New session 22 of user core. Sep 6 01:24:12.776211 systemd[1]: Started session-22.scope. Sep 6 01:24:15.400868 systemd[1]: run-containerd-runc-k8s.io-8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a-runc.cFXt4a.mount: Deactivated successfully. Sep 6 01:24:15.416462 env[1468]: time="2025-09-06T01:24:15.416419573Z" level=info msg="StopContainer for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" with timeout 30 (s)" Sep 6 01:24:15.419042 env[1468]: time="2025-09-06T01:24:15.419006702Z" level=info msg="Stop container \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" with signal terminated" Sep 6 01:24:15.427583 env[1468]: time="2025-09-06T01:24:15.427531054Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:24:15.435089 env[1468]: time="2025-09-06T01:24:15.435051801Z" level=info msg="StopContainer for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" with timeout 2 (s)" Sep 6 01:24:15.435720 env[1468]: time="2025-09-06T01:24:15.435695524Z" level=info msg="Stop container \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" with signal terminated" Sep 6 01:24:15.436300 systemd[1]: cri-containerd-3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106.scope: Deactivated successfully. 
Sep 6 01:24:15.444705 systemd-networkd[1620]: lxc_health: Link DOWN Sep 6 01:24:15.444713 systemd-networkd[1620]: lxc_health: Lost carrier Sep 6 01:24:15.459241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106-rootfs.mount: Deactivated successfully. Sep 6 01:24:15.479605 systemd[1]: cri-containerd-8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a.scope: Deactivated successfully. Sep 6 01:24:15.479971 systemd[1]: cri-containerd-8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a.scope: Consumed 6.265s CPU time. Sep 6 01:24:15.504195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a-rootfs.mount: Deactivated successfully. Sep 6 01:24:15.523869 env[1468]: time="2025-09-06T01:24:15.523825608Z" level=info msg="shim disconnected" id=3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106 Sep 6 01:24:15.524146 env[1468]: time="2025-09-06T01:24:15.524095009Z" level=warning msg="cleaning up after shim disconnected" id=3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106 namespace=k8s.io Sep 6 01:24:15.524293 env[1468]: time="2025-09-06T01:24:15.524270489Z" level=info msg="cleaning up dead shim" Sep 6 01:24:15.524934 env[1468]: time="2025-09-06T01:24:15.524906972Z" level=info msg="shim disconnected" id=8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a Sep 6 01:24:15.525144 env[1468]: time="2025-09-06T01:24:15.525124772Z" level=warning msg="cleaning up after shim disconnected" id=8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a namespace=k8s.io Sep 6 01:24:15.525233 env[1468]: time="2025-09-06T01:24:15.525218573Z" level=info msg="cleaning up dead shim" Sep 6 01:24:15.533018 env[1468]: time="2025-09-06T01:24:15.532978801Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:15Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n" Sep 6 01:24:15.534912 env[1468]: time="2025-09-06T01:24:15.534879688Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4076 runtime=io.containerd.runc.v2\n" Sep 6 01:24:15.541548 env[1468]: time="2025-09-06T01:24:15.541509553Z" level=info msg="StopContainer for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" returns successfully" Sep 6 01:24:15.543666 env[1468]: time="2025-09-06T01:24:15.543634320Z" level=info msg="StopPodSandbox for \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\"" Sep 6 01:24:15.543852 env[1468]: time="2025-09-06T01:24:15.543834241Z" level=info msg="Container to stop \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:15.544745 env[1468]: time="2025-09-06T01:24:15.544716404Z" level=info msg="StopContainer for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" returns successfully" Sep 6 01:24:15.545754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee-shm.mount: Deactivated successfully. 
Sep 6 01:24:15.547947 env[1468]: time="2025-09-06T01:24:15.547919136Z" level=info msg="StopPodSandbox for \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\"" Sep 6 01:24:15.548146 env[1468]: time="2025-09-06T01:24:15.548127497Z" level=info msg="Container to stop \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:15.549454 env[1468]: time="2025-09-06T01:24:15.549406662Z" level=info msg="Container to stop \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:15.549572 env[1468]: time="2025-09-06T01:24:15.549553182Z" level=info msg="Container to stop \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:15.549647 env[1468]: time="2025-09-06T01:24:15.549631982Z" level=info msg="Container to stop \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:15.549717 env[1468]: time="2025-09-06T01:24:15.549693703Z" level=info msg="Container to stop \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:15.553004 systemd[1]: cri-containerd-70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee.scope: Deactivated successfully. Sep 6 01:24:15.561609 systemd[1]: cri-containerd-fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed.scope: Deactivated successfully. 
Sep 6 01:24:15.594977 env[1468]: time="2025-09-06T01:24:15.594933269Z" level=info msg="shim disconnected" id=70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee Sep 6 01:24:15.595418 env[1468]: time="2025-09-06T01:24:15.595398351Z" level=warning msg="cleaning up after shim disconnected" id=70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee namespace=k8s.io Sep 6 01:24:15.595896 env[1468]: time="2025-09-06T01:24:15.595876272Z" level=info msg="cleaning up dead shim" Sep 6 01:24:15.596618 env[1468]: time="2025-09-06T01:24:15.595756792Z" level=info msg="shim disconnected" id=fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed Sep 6 01:24:15.596737 env[1468]: time="2025-09-06T01:24:15.596720395Z" level=warning msg="cleaning up after shim disconnected" id=fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed namespace=k8s.io Sep 6 01:24:15.596809 env[1468]: time="2025-09-06T01:24:15.596796316Z" level=info msg="cleaning up dead shim" Sep 6 01:24:15.602340 env[1468]: time="2025-09-06T01:24:15.602306816Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4138 runtime=io.containerd.runc.v2\n" Sep 6 01:24:15.602759 env[1468]: time="2025-09-06T01:24:15.602733218Z" level=info msg="TearDown network for sandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" successfully" Sep 6 01:24:15.602881 env[1468]: time="2025-09-06T01:24:15.602862578Z" level=info msg="StopPodSandbox for \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" returns successfully" Sep 6 01:24:15.606854 env[1468]: time="2025-09-06T01:24:15.606830513Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n" Sep 6 01:24:15.608594 env[1468]: time="2025-09-06T01:24:15.608566959Z" level=info msg="TearDown network for sandbox 
\"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" successfully" Sep 6 01:24:15.608722 env[1468]: time="2025-09-06T01:24:15.608707239Z" level=info msg="StopPodSandbox for \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" returns successfully" Sep 6 01:24:15.732635 kubelet[2471]: I0906 01:24:15.732519 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-cgroup\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.732635 kubelet[2471]: I0906 01:24:15.732624 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.733010 kubelet[2471]: I0906 01:24:15.732680 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cni-path\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733010 kubelet[2471]: I0906 01:24:15.732711 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-hubble-tls\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733010 kubelet[2471]: I0906 01:24:15.732746 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cni-path" (OuterVolumeSpecName: "cni-path") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.733093 kubelet[2471]: I0906 01:24:15.733034 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64qqh\" (UniqueName: \"kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-kube-api-access-64qqh\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733093 kubelet[2471]: I0906 01:24:15.733060 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7be46365-86ee-458b-93b8-831c3a8a078e-clustermesh-secrets\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733093 kubelet[2471]: I0906 01:24:15.733079 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-hostproc\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733197 kubelet[2471]: I0906 01:24:15.733097 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-net\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733197 kubelet[2471]: I0906 01:24:15.733138 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-kernel\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733197 kubelet[2471]: I0906 01:24:15.733152 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-etc-cni-netd\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733197 kubelet[2471]: I0906 01:24:15.733169 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08b91e49-f176-40ac-bde4-d815d9bd2036-cilium-config-path\") pod \"08b91e49-f176-40ac-bde4-d815d9bd2036\" (UID: \"08b91e49-f176-40ac-bde4-d815d9bd2036\") " Sep 6 01:24:15.733197 kubelet[2471]: I0906 01:24:15.733189 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-config-path\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733316 kubelet[2471]: I0906 01:24:15.733206 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-lib-modules\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733316 kubelet[2471]: I0906 01:24:15.733224 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8jf6\" (UniqueName: \"kubernetes.io/projected/08b91e49-f176-40ac-bde4-d815d9bd2036-kube-api-access-f8jf6\") pod \"08b91e49-f176-40ac-bde4-d815d9bd2036\" (UID: \"08b91e49-f176-40ac-bde4-d815d9bd2036\") " Sep 6 01:24:15.733316 kubelet[2471]: I0906 01:24:15.733237 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-run\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733316 kubelet[2471]: I0906 
01:24:15.733250 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-xtables-lock\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733316 kubelet[2471]: I0906 01:24:15.733265 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-bpf-maps\") pod \"7be46365-86ee-458b-93b8-831c3a8a078e\" (UID: \"7be46365-86ee-458b-93b8-831c3a8a078e\") " Sep 6 01:24:15.733316 kubelet[2471]: I0906 01:24:15.733303 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-cgroup\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.733316 kubelet[2471]: I0906 01:24:15.733313 2471 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cni-path\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.733473 kubelet[2471]: I0906 01:24:15.733337 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.737748 kubelet[2471]: I0906 01:24:15.737712 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b91e49-f176-40ac-bde4-d815d9bd2036-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "08b91e49-f176-40ac-bde4-d815d9bd2036" (UID: "08b91e49-f176-40ac-bde4-d815d9bd2036"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:24:15.738562 kubelet[2471]: I0906 01:24:15.738521 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-hostproc" (OuterVolumeSpecName: "hostproc") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.738562 kubelet[2471]: I0906 01:24:15.738562 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.738674 kubelet[2471]: I0906 01:24:15.738578 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.738674 kubelet[2471]: I0906 01:24:15.738594 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.738674 kubelet[2471]: I0906 01:24:15.738659 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-kube-api-access-64qqh" (OuterVolumeSpecName: "kube-api-access-64qqh") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "kube-api-access-64qqh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:15.738744 kubelet[2471]: I0906 01:24:15.738712 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:15.738963 kubelet[2471]: I0906 01:24:15.738931 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.739014 kubelet[2471]: I0906 01:24:15.738966 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.739014 kubelet[2471]: I0906 01:24:15.738983 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:15.740283 kubelet[2471]: I0906 01:24:15.740243 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:24:15.741342 kubelet[2471]: I0906 01:24:15.741314 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be46365-86ee-458b-93b8-831c3a8a078e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7be46365-86ee-458b-93b8-831c3a8a078e" (UID: "7be46365-86ee-458b-93b8-831c3a8a078e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:15.742381 kubelet[2471]: I0906 01:24:15.742356 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b91e49-f176-40ac-bde4-d815d9bd2036-kube-api-access-f8jf6" (OuterVolumeSpecName: "kube-api-access-f8jf6") pod "08b91e49-f176-40ac-bde4-d815d9bd2036" (UID: "08b91e49-f176-40ac-bde4-d815d9bd2036"). InnerVolumeSpecName "kube-api-access-f8jf6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:15.834123 kubelet[2471]: I0906 01:24:15.834071 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-run\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834311 kubelet[2471]: I0906 01:24:15.834299 2471 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-xtables-lock\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834373 kubelet[2471]: I0906 01:24:15.834361 2471 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-bpf-maps\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834440 kubelet[2471]: I0906 01:24:15.834431 2471 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-hubble-tls\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834500 kubelet[2471]: I0906 01:24:15.834490 2471 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-64qqh\" (UniqueName: \"kubernetes.io/projected/7be46365-86ee-458b-93b8-831c3a8a078e-kube-api-access-64qqh\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834559 kubelet[2471]: I0906 01:24:15.834549 2471 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7be46365-86ee-458b-93b8-831c3a8a078e-clustermesh-secrets\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834618 kubelet[2471]: I0906 01:24:15.834607 2471 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-hostproc\") on node 
\"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834680 kubelet[2471]: I0906 01:24:15.834668 2471 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-net\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834742 kubelet[2471]: I0906 01:24:15.834731 2471 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834805 kubelet[2471]: I0906 01:24:15.834796 2471 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-etc-cni-netd\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834864 kubelet[2471]: I0906 01:24:15.834855 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08b91e49-f176-40ac-bde4-d815d9bd2036-cilium-config-path\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834921 kubelet[2471]: I0906 01:24:15.834912 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7be46365-86ee-458b-93b8-831c3a8a078e-cilium-config-path\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.834976 kubelet[2471]: I0906 01:24:15.834968 2471 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7be46365-86ee-458b-93b8-831c3a8a078e-lib-modules\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:15.835036 kubelet[2471]: I0906 01:24:15.835024 2471 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f8jf6\" (UniqueName: 
\"kubernetes.io/projected/08b91e49-f176-40ac-bde4-d815d9bd2036-kube-api-access-f8jf6\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:16.008685 systemd[1]: Removed slice kubepods-besteffort-pod08b91e49_f176_40ac_bde4_d815d9bd2036.slice. Sep 6 01:24:16.012791 systemd[1]: Removed slice kubepods-burstable-pod7be46365_86ee_458b_93b8_831c3a8a078e.slice. Sep 6 01:24:16.012871 systemd[1]: kubepods-burstable-pod7be46365_86ee_458b_93b8_831c3a8a078e.slice: Consumed 6.355s CPU time. Sep 6 01:24:16.394066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee-rootfs.mount: Deactivated successfully. Sep 6 01:24:16.394168 systemd[1]: var-lib-kubelet-pods-08b91e49\x2df176\x2d40ac\x2dbde4\x2dd815d9bd2036-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df8jf6.mount: Deactivated successfully. Sep 6 01:24:16.394231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed-rootfs.mount: Deactivated successfully. Sep 6 01:24:16.394286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed-shm.mount: Deactivated successfully. Sep 6 01:24:16.394338 systemd[1]: var-lib-kubelet-pods-7be46365\x2d86ee\x2d458b\x2d93b8\x2d831c3a8a078e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64qqh.mount: Deactivated successfully. Sep 6 01:24:16.394390 systemd[1]: var-lib-kubelet-pods-7be46365\x2d86ee\x2d458b\x2d93b8\x2d831c3a8a078e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:16.394437 systemd[1]: var-lib-kubelet-pods-7be46365\x2d86ee\x2d458b\x2d93b8\x2d831c3a8a078e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 01:24:16.406850 kubelet[2471]: I0906 01:24:16.406823 2471 scope.go:117] "RemoveContainer" containerID="3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106" Sep 6 01:24:16.411535 env[1468]: time="2025-09-06T01:24:16.411246287Z" level=info msg="RemoveContainer for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\"" Sep 6 01:24:16.421767 env[1468]: time="2025-09-06T01:24:16.421635244Z" level=info msg="RemoveContainer for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" returns successfully" Sep 6 01:24:16.422233 kubelet[2471]: I0906 01:24:16.422210 2471 scope.go:117] "RemoveContainer" containerID="3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106" Sep 6 01:24:16.422719 env[1468]: time="2025-09-06T01:24:16.422633648Z" level=error msg="ContainerStatus for \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\": not found" Sep 6 01:24:16.422895 kubelet[2471]: E0906 01:24:16.422874 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\": not found" containerID="3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106" Sep 6 01:24:16.423002 kubelet[2471]: I0906 01:24:16.422965 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106"} err="failed to get container status \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b8feee63961dba7e467d83dfc0cb64d3458565406be49e60902cdd50cf45106\": not found" Sep 6 01:24:16.423070 kubelet[2471]: I0906 01:24:16.423055 
2471 scope.go:117] "RemoveContainer" containerID="8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a" Sep 6 01:24:16.426486 env[1468]: time="2025-09-06T01:24:16.426440182Z" level=info msg="RemoveContainer for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\"" Sep 6 01:24:16.436208 env[1468]: time="2025-09-06T01:24:16.436059137Z" level=info msg="RemoveContainer for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" returns successfully" Sep 6 01:24:16.436501 kubelet[2471]: I0906 01:24:16.436482 2471 scope.go:117] "RemoveContainer" containerID="174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2" Sep 6 01:24:16.439844 env[1468]: time="2025-09-06T01:24:16.439740670Z" level=info msg="RemoveContainer for \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\"" Sep 6 01:24:16.452805 env[1468]: time="2025-09-06T01:24:16.452222435Z" level=info msg="RemoveContainer for \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\" returns successfully" Sep 6 01:24:16.452963 kubelet[2471]: I0906 01:24:16.452760 2471 scope.go:117] "RemoveContainer" containerID="9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f" Sep 6 01:24:16.454914 env[1468]: time="2025-09-06T01:24:16.454878165Z" level=info msg="RemoveContainer for \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\"" Sep 6 01:24:16.470680 env[1468]: time="2025-09-06T01:24:16.470629662Z" level=info msg="RemoveContainer for \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\" returns successfully" Sep 6 01:24:16.470972 kubelet[2471]: I0906 01:24:16.470918 2471 scope.go:117] "RemoveContainer" containerID="7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307" Sep 6 01:24:16.472297 env[1468]: time="2025-09-06T01:24:16.472258708Z" level=info msg="RemoveContainer for \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\"" Sep 6 01:24:16.491681 env[1468]: 
time="2025-09-06T01:24:16.491618898Z" level=info msg="RemoveContainer for \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\" returns successfully" Sep 6 01:24:16.491997 kubelet[2471]: I0906 01:24:16.491953 2471 scope.go:117] "RemoveContainer" containerID="99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293" Sep 6 01:24:16.493435 env[1468]: time="2025-09-06T01:24:16.493184504Z" level=info msg="RemoveContainer for \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\"" Sep 6 01:24:16.506128 env[1468]: time="2025-09-06T01:24:16.505988550Z" level=info msg="RemoveContainer for \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\" returns successfully" Sep 6 01:24:16.508893 kubelet[2471]: I0906 01:24:16.506472 2471 scope.go:117] "RemoveContainer" containerID="8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a" Sep 6 01:24:16.509096 env[1468]: time="2025-09-06T01:24:16.506748033Z" level=error msg="ContainerStatus for \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\": not found" Sep 6 01:24:16.509381 kubelet[2471]: E0906 01:24:16.509353 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\": not found" containerID="8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a" Sep 6 01:24:16.509490 kubelet[2471]: I0906 01:24:16.509467 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a"} err="failed to get container status \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"8729b334c4002ecc1b00efbd886fcb8d8b6990a088b729066e800d290bcc6f5a\": not found" Sep 6 01:24:16.509563 kubelet[2471]: I0906 01:24:16.509552 2471 scope.go:117] "RemoveContainer" containerID="174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2" Sep 6 01:24:16.509958 env[1468]: time="2025-09-06T01:24:16.509895444Z" level=error msg="ContainerStatus for \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\": not found" Sep 6 01:24:16.510161 kubelet[2471]: E0906 01:24:16.510141 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\": not found" containerID="174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2" Sep 6 01:24:16.510261 kubelet[2471]: I0906 01:24:16.510241 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2"} err="failed to get container status \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"174c1ec3f7afb3f96ec473f1bea23202e6d08ca997afffd9798e01ef88def2b2\": not found" Sep 6 01:24:16.510322 kubelet[2471]: I0906 01:24:16.510311 2471 scope.go:117] "RemoveContainer" containerID="9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f" Sep 6 01:24:16.510604 env[1468]: time="2025-09-06T01:24:16.510532846Z" level=error msg="ContainerStatus for \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\": not found" Sep 6 01:24:16.510747 kubelet[2471]: E0906 01:24:16.510729 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\": not found" containerID="9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f" Sep 6 01:24:16.510820 kubelet[2471]: I0906 01:24:16.510804 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f"} err="failed to get container status \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9349ce6e4b5e304ca16f2eb85d7e250d0e1453e52e2d18e9ed7a3939672a702f\": not found" Sep 6 01:24:16.510881 kubelet[2471]: I0906 01:24:16.510871 2471 scope.go:117] "RemoveContainer" containerID="7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307" Sep 6 01:24:16.511225 env[1468]: time="2025-09-06T01:24:16.511168409Z" level=error msg="ContainerStatus for \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\": not found" Sep 6 01:24:16.511387 kubelet[2471]: E0906 01:24:16.511371 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\": not found" containerID="7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307" Sep 6 01:24:16.511467 kubelet[2471]: I0906 01:24:16.511451 2471 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307"} err="failed to get container status \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\": rpc error: code = NotFound desc = an error occurred when try to find container \"7419ad7d140319e8675258f144da46a89c3e1b16b804caac23a9bcb314d8b307\": not found" Sep 6 01:24:16.511525 kubelet[2471]: I0906 01:24:16.511515 2471 scope.go:117] "RemoveContainer" containerID="99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293" Sep 6 01:24:16.511769 env[1468]: time="2025-09-06T01:24:16.511722131Z" level=error msg="ContainerStatus for \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\": not found" Sep 6 01:24:16.511906 kubelet[2471]: E0906 01:24:16.511892 2471 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\": not found" containerID="99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293" Sep 6 01:24:16.512004 kubelet[2471]: I0906 01:24:16.511987 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293"} err="failed to get container status \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\": rpc error: code = NotFound desc = an error occurred when try to find container \"99b954add25a70f2304aa9bd9ac80cace928655a26b406ef35ab310ef6b78293\": not found" Sep 6 01:24:17.412932 sshd[4006]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:17.416169 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:35624.service: Deactivated successfully. 
Sep 6 01:24:17.417579 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 01:24:17.417752 systemd[1]: session-22.scope: Consumed 1.741s CPU time. Sep 6 01:24:17.418471 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit. Sep 6 01:24:17.419351 systemd-logind[1457]: Removed session 22. Sep 6 01:24:17.482853 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:35634.service. Sep 6 01:24:17.894499 sshd[4171]: Accepted publickey for core from 10.200.16.10 port 35634 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:17.895902 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:17.900009 systemd-logind[1457]: New session 23 of user core. Sep 6 01:24:17.900561 systemd[1]: Started session-23.scope. Sep 6 01:24:18.004602 kubelet[2471]: I0906 01:24:18.004565 2471 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b91e49-f176-40ac-bde4-d815d9bd2036" path="/var/lib/kubelet/pods/08b91e49-f176-40ac-bde4-d815d9bd2036/volumes" Sep 6 01:24:18.005402 kubelet[2471]: I0906 01:24:18.005381 2471 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be46365-86ee-458b-93b8-831c3a8a078e" path="/var/lib/kubelet/pods/7be46365-86ee-458b-93b8-831c3a8a078e/volumes" Sep 6 01:24:19.439172 systemd[1]: Created slice kubepods-burstable-pod81a2e1d4_9c71_43a6_8e74_16407e235d3e.slice. Sep 6 01:24:19.450603 sshd[4171]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:19.453400 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit. Sep 6 01:24:19.453560 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:35634.service: Deactivated successfully. Sep 6 01:24:19.454254 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 01:24:19.454407 systemd[1]: session-23.scope: Consumed 1.149s CPU time. Sep 6 01:24:19.454964 systemd-logind[1457]: Removed session 23. 
Sep 6 01:24:19.519422 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:35636.service. Sep 6 01:24:19.553785 kubelet[2471]: I0906 01:24:19.553347 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-clustermesh-secrets\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.553785 kubelet[2471]: I0906 01:24:19.553386 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-kernel\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.553785 kubelet[2471]: I0906 01:24:19.553418 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-bpf-maps\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.553785 kubelet[2471]: I0906 01:24:19.553434 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-cgroup\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.553785 kubelet[2471]: I0906 01:24:19.553454 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cni-path\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.553785 kubelet[2471]: I0906 
01:24:19.553468 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-etc-cni-netd\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554339 kubelet[2471]: I0906 01:24:19.553501 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-lib-modules\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554339 kubelet[2471]: I0906 01:24:19.553517 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-ipsec-secrets\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554339 kubelet[2471]: I0906 01:24:19.553531 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-net\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554339 kubelet[2471]: I0906 01:24:19.553546 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-config-path\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554339 kubelet[2471]: I0906 01:24:19.553572 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hubble-tls\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554339 kubelet[2471]: I0906 01:24:19.553592 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-xtables-lock\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554482 kubelet[2471]: I0906 01:24:19.553606 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5bg\" (UniqueName: \"kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-kube-api-access-qz5bg\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554482 kubelet[2471]: I0906 01:24:19.553622 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-run\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.554482 kubelet[2471]: I0906 01:24:19.553650 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hostproc\") pod \"cilium-brlrg\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " pod="kube-system/cilium-brlrg" Sep 6 01:24:19.743548 env[1468]: time="2025-09-06T01:24:19.743160439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brlrg,Uid:81a2e1d4-9c71-43a6-8e74-16407e235d3e,Namespace:kube-system,Attempt:0,}" Sep 6 01:24:19.782162 env[1468]: time="2025-09-06T01:24:19.782070894Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:19.782162 env[1468]: time="2025-09-06T01:24:19.782124614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:19.782162 env[1468]: time="2025-09-06T01:24:19.782135054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:19.782657 env[1468]: time="2025-09-06T01:24:19.782600496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4 pid=4197 runtime=io.containerd.runc.v2 Sep 6 01:24:19.797629 systemd[1]: Started cri-containerd-0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4.scope. Sep 6 01:24:19.826923 env[1468]: time="2025-09-06T01:24:19.826872969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brlrg,Uid:81a2e1d4-9c71-43a6-8e74-16407e235d3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4\"" Sep 6 01:24:19.837204 env[1468]: time="2025-09-06T01:24:19.837158405Z" level=info msg="CreateContainer within sandbox \"0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:24:19.873817 env[1468]: time="2025-09-06T01:24:19.873768452Z" level=info msg="CreateContainer within sandbox \"0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\"" Sep 6 01:24:19.874516 env[1468]: time="2025-09-06T01:24:19.874486294Z" level=info msg="StartContainer for 
\"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\"" Sep 6 01:24:19.889680 systemd[1]: Started cri-containerd-dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb.scope. Sep 6 01:24:19.900569 systemd[1]: cri-containerd-dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb.scope: Deactivated successfully. Sep 6 01:24:19.939963 sshd[4182]: Accepted publickey for core from 10.200.16.10 port 35636 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:19.940557 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:19.945199 systemd[1]: Started session-24.scope. Sep 6 01:24:19.946734 systemd-logind[1457]: New session 24 of user core. Sep 6 01:24:19.951469 env[1468]: time="2025-09-06T01:24:19.950577118Z" level=info msg="shim disconnected" id=dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb Sep 6 01:24:19.951469 env[1468]: time="2025-09-06T01:24:19.950770639Z" level=warning msg="cleaning up after shim disconnected" id=dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb namespace=k8s.io Sep 6 01:24:19.951469 env[1468]: time="2025-09-06T01:24:19.950782359Z" level=info msg="cleaning up dead shim" Sep 6 01:24:19.962522 env[1468]: time="2025-09-06T01:24:19.962472759Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4257 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T01:24:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 01:24:19.963008 env[1468]: time="2025-09-06T01:24:19.962909761Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Sep 6 01:24:19.963224 env[1468]: time="2025-09-06T01:24:19.963180082Z" level=error msg="Failed 
to pipe stdout of container \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\"" error="reading from a closed fifo" Sep 6 01:24:19.964201 env[1468]: time="2025-09-06T01:24:19.964164565Z" level=error msg="Failed to pipe stderr of container \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\"" error="reading from a closed fifo" Sep 6 01:24:19.969165 env[1468]: time="2025-09-06T01:24:19.969053782Z" level=error msg="StartContainer for \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 01:24:19.969830 kubelet[2471]: E0906 01:24:19.969393 2471 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb" Sep 6 01:24:19.969830 kubelet[2471]: E0906 01:24:19.969554 2471 kuberuntime_manager.go:1358] "Unhandled Error" err=< Sep 6 01:24:19.969830 kubelet[2471]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 01:24:19.969830 kubelet[2471]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 01:24:19.969830 kubelet[2471]: rm /hostbin/cilium-mount Sep 6 01:24:19.970068 kubelet[2471]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz5bg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-brlrg_kube-system(81a2e1d4-9c71-43a6-8e74-16407e235d3e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 01:24:19.970068 kubelet[2471]: > logger="UnhandledError" Sep 6 01:24:19.971011 kubelet[2471]: E0906 01:24:19.970931 2471 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-brlrg" podUID="81a2e1d4-9c71-43a6-8e74-16407e235d3e" Sep 6 01:24:20.001923 env[1468]: time="2025-09-06T01:24:20.001796976Z" level=info msg="StopPodSandbox for \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\"" Sep 6 01:24:20.002457 env[1468]: time="2025-09-06T01:24:20.002144977Z" level=info msg="TearDown network for sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" successfully" Sep 6 01:24:20.002590 env[1468]: time="2025-09-06T01:24:20.002568938Z" level=info msg="StopPodSandbox for \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" returns successfully" Sep 6 01:24:20.005663 env[1468]: time="2025-09-06T01:24:20.005628309Z" level=info msg="RemovePodSandbox for \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\"" Sep 6 01:24:20.006234 env[1468]: time="2025-09-06T01:24:20.006171471Z" level=info msg="Forcibly stopping sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\"" Sep 6 01:24:20.006410 env[1468]: time="2025-09-06T01:24:20.006389671Z" level=info msg="TearDown network for sandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" successfully" Sep 6 01:24:20.020290 env[1468]: time="2025-09-06T01:24:20.020242199Z" level=info msg="RemovePodSandbox \"fdec800e1e6d680f437b01cccdcc70f1f02e7cf3c62a04e1375c0092eb69ffed\" returns successfully" Sep 6 01:24:20.021023 env[1468]: time="2025-09-06T01:24:20.020995761Z" level=info msg="StopPodSandbox for \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\"" Sep 6 01:24:20.021271 env[1468]: time="2025-09-06T01:24:20.021230242Z" 
level=info msg="TearDown network for sandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" successfully" Sep 6 01:24:20.021358 env[1468]: time="2025-09-06T01:24:20.021342242Z" level=info msg="StopPodSandbox for \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" returns successfully" Sep 6 01:24:20.021697 env[1468]: time="2025-09-06T01:24:20.021670003Z" level=info msg="RemovePodSandbox for \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\"" Sep 6 01:24:20.021753 env[1468]: time="2025-09-06T01:24:20.021700924Z" level=info msg="Forcibly stopping sandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\"" Sep 6 01:24:20.021780 env[1468]: time="2025-09-06T01:24:20.021761044Z" level=info msg="TearDown network for sandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" successfully" Sep 6 01:24:20.029467 env[1468]: time="2025-09-06T01:24:20.029380510Z" level=info msg="RemovePodSandbox \"70bf837af320cd1596a5140674d275854961adc2b7463d562b5ce72038836eee\" returns successfully" Sep 6 01:24:20.095851 kubelet[2471]: E0906 01:24:20.095807 2471 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:24:20.334913 sshd[4182]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:20.338212 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:35636.service: Deactivated successfully. Sep 6 01:24:20.338976 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 01:24:20.339578 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit. Sep 6 01:24:20.340436 systemd-logind[1457]: Removed session 24. Sep 6 01:24:20.422125 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:42606.service. 
Sep 6 01:24:20.431860 env[1468]: time="2025-09-06T01:24:20.431822004Z" level=info msg="StopPodSandbox for \"0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4\"" Sep 6 01:24:20.432150 env[1468]: time="2025-09-06T01:24:20.432126405Z" level=info msg="Container to stop \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:20.452015 systemd[1]: cri-containerd-0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4.scope: Deactivated successfully. Sep 6 01:24:20.522566 env[1468]: time="2025-09-06T01:24:20.522517674Z" level=info msg="shim disconnected" id=0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4 Sep 6 01:24:20.522768 env[1468]: time="2025-09-06T01:24:20.522579994Z" level=warning msg="cleaning up after shim disconnected" id=0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4 namespace=k8s.io Sep 6 01:24:20.522768 env[1468]: time="2025-09-06T01:24:20.522590354Z" level=info msg="cleaning up dead shim" Sep 6 01:24:20.529584 env[1468]: time="2025-09-06T01:24:20.529533298Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4299 runtime=io.containerd.runc.v2\n" Sep 6 01:24:20.529869 env[1468]: time="2025-09-06T01:24:20.529836179Z" level=info msg="TearDown network for sandbox \"0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4\" successfully" Sep 6 01:24:20.529905 env[1468]: time="2025-09-06T01:24:20.529865139Z" level=info msg="StopPodSandbox for \"0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4\" returns successfully" Sep 6 01:24:20.661214 kubelet[2471]: I0906 01:24:20.661091 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-net\") pod 
\"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661214 kubelet[2471]: I0906 01:24:20.661149 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cni-path\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661214 kubelet[2471]: I0906 01:24:20.661168 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-xtables-lock\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661214 kubelet[2471]: I0906 01:24:20.661198 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hostproc\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661660 kubelet[2471]: I0906 01:24:20.661220 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-clustermesh-secrets\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661660 kubelet[2471]: I0906 01:24:20.661236 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-kernel\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661660 kubelet[2471]: I0906 01:24:20.661251 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-lib-modules\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661660 kubelet[2471]: I0906 01:24:20.661275 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-cgroup\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661660 kubelet[2471]: I0906 01:24:20.661293 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-etc-cni-netd\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661660 kubelet[2471]: I0906 01:24:20.661307 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-ipsec-secrets\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661827 kubelet[2471]: I0906 01:24:20.661323 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-config-path\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661827 kubelet[2471]: I0906 01:24:20.661341 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz5bg\" (UniqueName: \"kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-kube-api-access-qz5bg\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661827 kubelet[2471]: 
I0906 01:24:20.661375 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-run\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661827 kubelet[2471]: I0906 01:24:20.661392 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-bpf-maps\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.661827 kubelet[2471]: I0906 01:24:20.661408 2471 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hubble-tls\") pod \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\" (UID: \"81a2e1d4-9c71-43a6-8e74-16407e235d3e\") " Sep 6 01:24:20.665352 kubelet[2471]: I0906 01:24:20.661975 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.665352 kubelet[2471]: I0906 01:24:20.662019 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.665352 kubelet[2471]: I0906 01:24:20.662037 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cni-path" (OuterVolumeSpecName: "cni-path") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.665352 kubelet[2471]: I0906 01:24:20.662054 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.665352 kubelet[2471]: I0906 01:24:20.662067 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hostproc" (OuterVolumeSpecName: "hostproc") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.664024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4-rootfs.mount: Deactivated successfully. Sep 6 01:24:20.665715 kubelet[2471]: I0906 01:24:20.665031 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.665715 kubelet[2471]: I0906 01:24:20.665071 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.664141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f0f31f966dd71ca484df407a14f8e27a727e3252de204f51b6b09775dca1af4-shm.mount: Deactivated successfully. Sep 6 01:24:20.667675 kubelet[2471]: I0906 01:24:20.667617 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.667829 kubelet[2471]: I0906 01:24:20.667814 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.669978 kubelet[2471]: I0906 01:24:20.669766 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:24:20.669978 kubelet[2471]: I0906 01:24:20.669831 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:20.671825 systemd[1]: var-lib-kubelet-pods-81a2e1d4\x2d9c71\x2d43a6\x2d8e74\x2d16407e235d3e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:20.673889 systemd[1]: var-lib-kubelet-pods-81a2e1d4\x2d9c71\x2d43a6\x2d8e74\x2d16407e235d3e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:24:20.676671 systemd[1]: var-lib-kubelet-pods-81a2e1d4\x2d9c71\x2d43a6\x2d8e74\x2d16407e235d3e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:20.678194 kubelet[2471]: I0906 01:24:20.678154 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:20.678285 kubelet[2471]: I0906 01:24:20.678262 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:20.678597 kubelet[2471]: I0906 01:24:20.678570 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:20.681961 systemd[1]: var-lib-kubelet-pods-81a2e1d4\x2d9c71\x2d43a6\x2d8e74\x2d16407e235d3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqz5bg.mount: Deactivated successfully. Sep 6 01:24:20.683191 kubelet[2471]: I0906 01:24:20.683157 2471 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-kube-api-access-qz5bg" (OuterVolumeSpecName: "kube-api-access-qz5bg") pod "81a2e1d4-9c71-43a6-8e74-16407e235d3e" (UID: "81a2e1d4-9c71-43a6-8e74-16407e235d3e"). InnerVolumeSpecName "kube-api-access-qz5bg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:20.762502 kubelet[2471]: I0906 01:24:20.762461 2471 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-net\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762502 kubelet[2471]: I0906 01:24:20.762498 2471 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cni-path\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762502 kubelet[2471]: I0906 01:24:20.762508 2471 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-xtables-lock\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762518 2471 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hostproc\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762528 2471 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-clustermesh-secrets\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762536 2471 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762545 2471 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-lib-modules\") on node 
\"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762553 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-cgroup\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762561 2471 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-etc-cni-netd\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762569 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762696 kubelet[2471]: I0906 01:24:20.762578 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-config-path\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762875 kubelet[2471]: I0906 01:24:20.762586 2471 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qz5bg\" (UniqueName: \"kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-kube-api-access-qz5bg\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762875 kubelet[2471]: I0906 01:24:20.762594 2471 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-cilium-run\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762875 kubelet[2471]: I0906 01:24:20.762602 2471 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81a2e1d4-9c71-43a6-8e74-16407e235d3e-bpf-maps\") on node 
\"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.762875 kubelet[2471]: I0906 01:24:20.762610 2471 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81a2e1d4-9c71-43a6-8e74-16407e235d3e-hubble-tls\") on node \"ci-3510.3.8-n-4d72badcbe\" DevicePath \"\"" Sep 6 01:24:20.909375 sshd[4280]: Accepted publickey for core from 10.200.16.10 port 42606 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:20.910634 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:20.914896 systemd[1]: Started session-25.scope. Sep 6 01:24:20.915237 systemd-logind[1457]: New session 25 of user core. Sep 6 01:24:21.434399 kubelet[2471]: I0906 01:24:21.434367 2471 scope.go:117] "RemoveContainer" containerID="dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb" Sep 6 01:24:21.435487 env[1468]: time="2025-09-06T01:24:21.435435370Z" level=info msg="RemoveContainer for \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\"" Sep 6 01:24:21.439639 systemd[1]: Removed slice kubepods-burstable-pod81a2e1d4_9c71_43a6_8e74_16407e235d3e.slice. Sep 6 01:24:21.449180 env[1468]: time="2025-09-06T01:24:21.449134536Z" level=info msg="RemoveContainer for \"dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb\" returns successfully" Sep 6 01:24:21.522061 systemd[1]: Created slice kubepods-burstable-pod10b9976f_f7b3_41f5_9fc5_b8beb0c16edd.slice. 
Sep 6 01:24:21.565725 kubelet[2471]: I0906 01:24:21.565674 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-cilium-run\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565725 kubelet[2471]: I0906 01:24:21.565725 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-cilium-config-path\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565926 kubelet[2471]: I0906 01:24:21.565746 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzw6g\" (UniqueName: \"kubernetes.io/projected/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-kube-api-access-gzw6g\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565926 kubelet[2471]: I0906 01:24:21.565769 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-hostproc\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565926 kubelet[2471]: I0906 01:24:21.565785 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-etc-cni-netd\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565926 kubelet[2471]: I0906 01:24:21.565801 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-host-proc-sys-net\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565926 kubelet[2471]: I0906 01:24:21.565816 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-hubble-tls\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.565926 kubelet[2471]: I0906 01:24:21.565838 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-cni-path\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566072 kubelet[2471]: I0906 01:24:21.565854 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-lib-modules\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566072 kubelet[2471]: I0906 01:24:21.565867 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-host-proc-sys-kernel\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566072 kubelet[2471]: I0906 01:24:21.565887 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-bpf-maps\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566072 kubelet[2471]: I0906 01:24:21.565901 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-cilium-cgroup\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566072 kubelet[2471]: I0906 01:24:21.565917 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-xtables-lock\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566072 kubelet[2471]: I0906 01:24:21.565933 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-clustermesh-secrets\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.566232 kubelet[2471]: I0906 01:24:21.565948 2471 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10b9976f-f7b3-41f5-9fc5-b8beb0c16edd-cilium-ipsec-secrets\") pod \"cilium-r9pk2\" (UID: \"10b9976f-f7b3-41f5-9fc5-b8beb0c16edd\") " pod="kube-system/cilium-r9pk2"
Sep 6 01:24:21.836127 env[1468]: time="2025-09-06T01:24:21.833415229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9pk2,Uid:10b9976f-f7b3-41f5-9fc5-b8beb0c16edd,Namespace:kube-system,Attempt:0,}"
Sep 6 01:24:21.876164 env[1468]: time="2025-09-06T01:24:21.876079613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:24:21.876344 env[1468]: time="2025-09-06T01:24:21.876320973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:24:21.876445 env[1468]: time="2025-09-06T01:24:21.876424414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:24:21.877180 env[1468]: time="2025-09-06T01:24:21.876655775Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0 pid=4334 runtime=io.containerd.runc.v2
Sep 6 01:24:21.887523 systemd[1]: Started cri-containerd-3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0.scope.
Sep 6 01:24:21.909226 env[1468]: time="2025-09-06T01:24:21.909179644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9pk2,Uid:10b9976f-f7b3-41f5-9fc5-b8beb0c16edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\""
Sep 6 01:24:21.920603 env[1468]: time="2025-09-06T01:24:21.920556602Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 01:24:21.960126 env[1468]: time="2025-09-06T01:24:21.960065015Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9\""
Sep 6 01:24:21.960813 env[1468]: time="2025-09-06T01:24:21.960787458Z" level=info msg="StartContainer for \"c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9\""
Sep 6 01:24:21.975474 systemd[1]: Started cri-containerd-c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9.scope.
Sep 6 01:24:22.005383 kubelet[2471]: I0906 01:24:22.005344 2471 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a2e1d4-9c71-43a6-8e74-16407e235d3e" path="/var/lib/kubelet/pods/81a2e1d4-9c71-43a6-8e74-16407e235d3e/volumes"
Sep 6 01:24:22.014804 env[1468]: time="2025-09-06T01:24:22.014758919Z" level=info msg="StartContainer for \"c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9\" returns successfully"
Sep 6 01:24:22.019974 systemd[1]: cri-containerd-c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9.scope: Deactivated successfully.
Sep 6 01:24:22.061170 env[1468]: time="2025-09-06T01:24:22.061095832Z" level=info msg="shim disconnected" id=c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9
Sep 6 01:24:22.061466 env[1468]: time="2025-09-06T01:24:22.061444034Z" level=warning msg="cleaning up after shim disconnected" id=c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9 namespace=k8s.io
Sep 6 01:24:22.061545 env[1468]: time="2025-09-06T01:24:22.061532474Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:22.068716 env[1468]: time="2025-09-06T01:24:22.068673978Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4417 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:22.448540 env[1468]: time="2025-09-06T01:24:22.448500117Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 01:24:22.487921 env[1468]: time="2025-09-06T01:24:22.487866448Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9\""
Sep 6 01:24:22.488568 env[1468]: time="2025-09-06T01:24:22.488470770Z" level=info msg="StartContainer for \"b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9\""
Sep 6 01:24:22.502550 systemd[1]: Started cri-containerd-b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9.scope.
Sep 6 01:24:22.533264 env[1468]: time="2025-09-06T01:24:22.533202078Z" level=info msg="StartContainer for \"b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9\" returns successfully"
Sep 6 01:24:22.539962 systemd[1]: cri-containerd-b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9.scope: Deactivated successfully.
Sep 6 01:24:22.586908 env[1468]: time="2025-09-06T01:24:22.586861736Z" level=info msg="shim disconnected" id=b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9
Sep 6 01:24:22.587161 env[1468]: time="2025-09-06T01:24:22.587140657Z" level=warning msg="cleaning up after shim disconnected" id=b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9 namespace=k8s.io
Sep 6 01:24:22.587229 env[1468]: time="2025-09-06T01:24:22.587216217Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:22.597216 env[1468]: time="2025-09-06T01:24:22.597169810Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4480 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:23.055201 kubelet[2471]: W0906 01:24:23.055165 2471 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81a2e1d4_9c71_43a6_8e74_16407e235d3e.slice/cri-containerd-dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb.scope WatchSource:0}: container "dc47edca0f823921452f37fcbb7f04050fe6bf2905b5b6ae6d2e9addf4819cdb" in namespace "k8s.io": not found
Sep 6 01:24:23.452773 env[1468]: time="2025-09-06T01:24:23.452453705Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 01:24:23.476967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount30834013.mount: Deactivated successfully.
Sep 6 01:24:23.490738 env[1468]: time="2025-09-06T01:24:23.490686510Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40\""
Sep 6 01:24:23.491538 env[1468]: time="2025-09-06T01:24:23.491450312Z" level=info msg="StartContainer for \"e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40\""
Sep 6 01:24:23.511025 systemd[1]: Started cri-containerd-e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40.scope.
Sep 6 01:24:23.542709 systemd[1]: cri-containerd-e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40.scope: Deactivated successfully.
Sep 6 01:24:23.544493 env[1468]: time="2025-09-06T01:24:23.544431286Z" level=info msg="StartContainer for \"e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40\" returns successfully"
Sep 6 01:24:23.576846 env[1468]: time="2025-09-06T01:24:23.576801271Z" level=info msg="shim disconnected" id=e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40
Sep 6 01:24:23.577159 env[1468]: time="2025-09-06T01:24:23.577137873Z" level=warning msg="cleaning up after shim disconnected" id=e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40 namespace=k8s.io
Sep 6 01:24:23.577247 env[1468]: time="2025-09-06T01:24:23.577233313Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:23.584858 env[1468]: time="2025-09-06T01:24:23.584815498Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4536 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:23.672545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40-rootfs.mount: Deactivated successfully.
Sep 6 01:24:24.003184 kubelet[2471]: E0906 01:24:24.003142 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5bz7p" podUID="0070d575-4634-49a3-a251-8cb5505cf132"
Sep 6 01:24:24.379241 kubelet[2471]: I0906 01:24:24.379126 2471 setters.go:618] "Node became not ready" node="ci-3510.3.8-n-4d72badcbe" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T01:24:24Z","lastTransitionTime":"2025-09-06T01:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 01:24:24.460356 env[1468]: time="2025-09-06T01:24:24.460302137Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 01:24:24.499236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889137074.mount: Deactivated successfully.
Sep 6 01:24:24.505715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548085354.mount: Deactivated successfully.
Sep 6 01:24:24.519036 env[1468]: time="2025-09-06T01:24:24.518963086Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd\""
Sep 6 01:24:24.519843 env[1468]: time="2025-09-06T01:24:24.519813049Z" level=info msg="StartContainer for \"0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd\""
Sep 6 01:24:24.535467 systemd[1]: Started cri-containerd-0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd.scope.
Sep 6 01:24:24.562552 systemd[1]: cri-containerd-0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd.scope: Deactivated successfully.
Sep 6 01:24:24.564420 env[1468]: time="2025-09-06T01:24:24.564382833Z" level=info msg="StartContainer for \"0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd\" returns successfully"
Sep 6 01:24:24.597012 env[1468]: time="2025-09-06T01:24:24.596960898Z" level=info msg="shim disconnected" id=0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd
Sep 6 01:24:24.597012 env[1468]: time="2025-09-06T01:24:24.597009578Z" level=warning msg="cleaning up after shim disconnected" id=0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd namespace=k8s.io
Sep 6 01:24:24.597012 env[1468]: time="2025-09-06T01:24:24.597021738Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:24.604492 env[1468]: time="2025-09-06T01:24:24.604443402Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4591 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:25.096868 kubelet[2471]: E0906 01:24:25.096657 2471 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 01:24:25.459462 env[1468]: time="2025-09-06T01:24:25.459353054Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 01:24:25.502873 env[1468]: time="2025-09-06T01:24:25.502825112Z" level=info msg="CreateContainer within sandbox \"3a6d6392ad0ec2d7057ad67abe77b7332680677a14d463a10d60202cac586cf0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14\""
Sep 6 01:24:25.503967 env[1468]: time="2025-09-06T01:24:25.503940116Z" level=info msg="StartContainer for \"f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14\""
Sep 6 01:24:25.525194 systemd[1]: Started cri-containerd-f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14.scope.
Sep 6 01:24:25.556345 env[1468]: time="2025-09-06T01:24:25.556257162Z" level=info msg="StartContainer for \"f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14\" returns successfully"
Sep 6 01:24:25.672760 systemd[1]: run-containerd-runc-k8s.io-f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14-runc.mCuY9a.mount: Deactivated successfully.
Sep 6 01:24:25.892133 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 6 01:24:26.003884 kubelet[2471]: E0906 01:24:26.003356 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5bz7p" podUID="0070d575-4634-49a3-a251-8cb5505cf132"
Sep 6 01:24:26.170526 kubelet[2471]: W0906 01:24:26.170308 2471 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b9976f_f7b3_41f5_9fc5_b8beb0c16edd.slice/cri-containerd-c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9.scope WatchSource:0}: task c87020d1eafd5dc42624cf137cd626c502a623a0888bd67c1a7e454c0f1384f9 not found
Sep 6 01:24:26.494926 kubelet[2471]: I0906 01:24:26.494855 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r9pk2" podStartSLOduration=5.494837998 podStartE2EDuration="5.494837998s" podCreationTimestamp="2025-09-06 01:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:24:26.494291197 +0000 UTC m=+186.648736613" watchObservedRunningTime="2025-09-06 01:24:26.494837998 +0000 UTC m=+186.649283414"
Sep 6 01:24:28.003324 kubelet[2471]: E0906 01:24:28.003281 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5bz7p" podUID="0070d575-4634-49a3-a251-8cb5505cf132"
Sep 6 01:24:28.618268 systemd-networkd[1620]: lxc_health: Link UP
Sep 6 01:24:28.632160 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 01:24:28.632487 systemd-networkd[1620]: lxc_health: Gained carrier
Sep 6 01:24:29.276274 kubelet[2471]: W0906 01:24:29.276227 2471 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b9976f_f7b3_41f5_9fc5_b8beb0c16edd.slice/cri-containerd-b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9.scope WatchSource:0}: task b4c03ef12286f93efa5fb737cdf6df7513dfbb770a798ed969bcd070580e16d9 not found
Sep 6 01:24:30.003678 kubelet[2471]: E0906 01:24:30.003611 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-5bz7p" podUID="0070d575-4634-49a3-a251-8cb5505cf132"
Sep 6 01:24:30.051366 systemd-networkd[1620]: lxc_health: Gained IPv6LL
Sep 6 01:24:31.698583 systemd[1]: run-containerd-runc-k8s.io-f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14-runc.hZpTAj.mount: Deactivated successfully.
Sep 6 01:24:32.383014 kubelet[2471]: W0906 01:24:32.382943 2471 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b9976f_f7b3_41f5_9fc5_b8beb0c16edd.slice/cri-containerd-e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40.scope WatchSource:0}: task e0acb898ba7ee36e5922e51a57c3007480b2558ea5564e1ad8d0ae9dc7872b40 not found
Sep 6 01:24:33.827864 systemd[1]: run-containerd-runc-k8s.io-f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14-runc.4Ry7JR.mount: Deactivated successfully.
Sep 6 01:24:35.489892 kubelet[2471]: W0906 01:24:35.489842 2471 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10b9976f_f7b3_41f5_9fc5_b8beb0c16edd.slice/cri-containerd-0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd.scope WatchSource:0}: task 0708f7f332281c7328808511a689716e21010089c2124fa3210cf462899db8cd not found
Sep 6 01:24:35.967079 systemd[1]: run-containerd-runc-k8s.io-f3ea69b53747ef405118981932c440d1965f7f8b8a7809b6998d4c342d979e14-runc.uBCo3g.mount: Deactivated successfully.
Sep 6 01:24:36.120426 sshd[4280]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:36.123030 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit.
Sep 6 01:24:36.123218 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:42606.service: Deactivated successfully.
Sep 6 01:24:36.123887 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 01:24:36.124624 systemd-logind[1457]: Removed session 25.