Nov 1 00:19:29.018727 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 1 00:19:29.018745 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Oct 31 23:12:38 -00 2025 Nov 1 00:19:29.018753 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Nov 1 00:19:29.018760 kernel: printk: bootconsole [pl11] enabled Nov 1 00:19:29.018765 kernel: efi: EFI v2.70 by EDK II Nov 1 00:19:29.018770 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98 Nov 1 00:19:29.018777 kernel: random: crng init done Nov 1 00:19:29.018782 kernel: ACPI: Early table checksum verification disabled Nov 1 00:19:29.018788 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Nov 1 00:19:29.018793 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018799 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018804 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 1 00:19:29.018810 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018816 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018822 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018828 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018834 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018841 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018852 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Nov 1 00:19:29.018858 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:19:29.018864 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Nov 1 00:19:29.018869 kernel: NUMA: Failed to initialise from firmware Nov 1 00:19:29.018875 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Nov 1 00:19:29.018881 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff] Nov 1 00:19:29.018886 kernel: Zone ranges: Nov 1 00:19:29.018892 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Nov 1 00:19:29.018897 kernel: DMA32 empty Nov 1 00:19:29.018903 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Nov 1 00:19:29.018910 kernel: Movable zone start for each node Nov 1 00:19:29.018916 kernel: Early memory node ranges Nov 1 00:19:29.018933 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Nov 1 00:19:29.018940 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Nov 1 00:19:29.018946 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Nov 1 00:19:29.018952 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Nov 1 00:19:29.018957 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Nov 1 00:19:29.018963 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Nov 1 00:19:29.018968 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Nov 1 00:19:29.018974 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Nov 1 00:19:29.018980 kernel: On node 0, zone DMA: 36 
pages in unavailable ranges Nov 1 00:19:29.018986 kernel: psci: probing for conduit method from ACPI. Nov 1 00:19:29.018995 kernel: psci: PSCIv1.1 detected in firmware. Nov 1 00:19:29.019001 kernel: psci: Using standard PSCI v0.2 function IDs Nov 1 00:19:29.019007 kernel: psci: MIGRATE_INFO_TYPE not supported. Nov 1 00:19:29.019013 kernel: psci: SMC Calling Convention v1.4 Nov 1 00:19:29.019019 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Nov 1 00:19:29.019027 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Nov 1 00:19:29.019033 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Nov 1 00:19:29.019039 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Nov 1 00:19:29.019045 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 1 00:19:29.019051 kernel: Detected PIPT I-cache on CPU0 Nov 1 00:19:29.019057 kernel: CPU features: detected: GIC system register CPU interface Nov 1 00:19:29.019064 kernel: CPU features: detected: Hardware dirty bit management Nov 1 00:19:29.019069 kernel: CPU features: detected: Spectre-BHB Nov 1 00:19:29.019076 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 1 00:19:29.019082 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 1 00:19:29.019088 kernel: CPU features: detected: ARM erratum 1418040 Nov 1 00:19:29.019095 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Nov 1 00:19:29.019101 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 1 00:19:29.019107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Nov 1 00:19:29.019113 kernel: Policy zone: Normal Nov 1 00:19:29.019120 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29 Nov 1 00:19:29.019127 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 00:19:29.019133 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:19:29.019139 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:19:29.019145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:19:29.019151 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Nov 1 00:19:29.019158 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved) Nov 1 00:19:29.019166 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:19:29.019172 kernel: trace event string verifier disabled Nov 1 00:19:29.019178 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:19:29.019184 kernel: rcu: RCU event tracing is enabled. Nov 1 00:19:29.019190 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:19:29.019196 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:19:29.019202 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:19:29.019208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 00:19:29.019215 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:19:29.019221 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 1 00:19:29.019226 kernel: GICv3: 960 SPIs implemented Nov 1 00:19:29.019234 kernel: GICv3: 0 Extended SPIs implemented Nov 1 00:19:29.019240 kernel: GICv3: Distributor has no Range Selector support Nov 1 00:19:29.019245 kernel: Root IRQ handler: gic_handle_irq Nov 1 00:19:29.019251 kernel: GICv3: 16 PPIs implemented Nov 1 00:19:29.019257 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Nov 1 00:19:29.019263 kernel: ITS: No ITS available, not enabling LPIs Nov 1 00:19:29.019270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:19:29.019276 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 1 00:19:29.019282 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 1 00:19:29.019288 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 1 00:19:29.019294 kernel: Console: colour dummy device 80x25 Nov 1 00:19:29.019302 kernel: printk: console [tty1] enabled Nov 1 00:19:29.019308 kernel: ACPI: Core revision 20210730 Nov 1 00:19:29.019315 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 1 00:19:29.019321 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:19:29.019327 kernel: LSM: Security Framework initializing Nov 1 00:19:29.019333 kernel: SELinux: Initializing. Nov 1 00:19:29.019340 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:19:29.019346 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:19:29.019352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Nov 1 00:19:29.019360 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Nov 1 00:19:29.019366 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:19:29.019372 kernel: Remapping and enabling EFI services. Nov 1 00:19:29.019378 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:19:29.019384 kernel: Detected PIPT I-cache on CPU1 Nov 1 00:19:29.019391 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Nov 1 00:19:29.019397 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:19:29.019403 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 1 00:19:29.019409 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:19:29.019415 kernel: SMP: Total of 2 processors activated. 
Nov 1 00:19:29.019422 kernel: CPU features: detected: 32-bit EL0 Support Nov 1 00:19:29.019429 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Nov 1 00:19:29.019435 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 1 00:19:29.019441 kernel: CPU features: detected: CRC32 instructions Nov 1 00:19:29.019448 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 1 00:19:29.019454 kernel: CPU features: detected: LSE atomic instructions Nov 1 00:19:29.019460 kernel: CPU features: detected: Privileged Access Never Nov 1 00:19:29.019466 kernel: CPU: All CPU(s) started at EL1 Nov 1 00:19:29.019472 kernel: alternatives: patching kernel code Nov 1 00:19:29.019480 kernel: devtmpfs: initialized Nov 1 00:19:29.019490 kernel: KASLR enabled Nov 1 00:19:29.019497 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:19:29.019505 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:19:29.019511 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:19:29.019517 kernel: SMBIOS 3.1.0 present. Nov 1 00:19:29.019524 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Nov 1 00:19:29.019530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:19:29.019537 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 1 00:19:29.019545 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 1 00:19:29.019552 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 1 00:19:29.019558 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:19:29.019565 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1 Nov 1 00:19:29.019572 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:19:29.019578 kernel: cpuidle: using governor menu Nov 1 00:19:29.019585 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 1 00:19:29.019593 kernel: ASID allocator initialised with 32768 entries Nov 1 00:19:29.019599 kernel: ACPI: bus type PCI registered Nov 1 00:19:29.019606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:19:29.019613 kernel: Serial: AMBA PL011 UART driver Nov 1 00:19:29.019619 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:19:29.019626 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Nov 1 00:19:29.019632 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:19:29.019639 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Nov 1 00:19:29.019646 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:19:29.019653 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 1 00:19:29.019660 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:19:29.019666 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:19:29.019673 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:19:29.019679 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:19:29.019686 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:19:29.019692 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:19:29.019699 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:19:29.019705 kernel: ACPI: Interpreter enabled Nov 1 00:19:29.019713 kernel: ACPI: Using GIC for interrupt routing Nov 1 00:19:29.019720 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Nov 1 00:19:29.019726 kernel: printk: console [ttyAMA0] enabled Nov 1 00:19:29.019733 kernel: printk: bootconsole [pl11] disabled Nov 1 00:19:29.019739 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Nov 1 00:19:29.019746 kernel: iommu: Default domain type: Translated Nov 1 00:19:29.019752 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 1 00:19:29.019759 kernel: vgaarb: loaded Nov 1 00:19:29.019765 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:19:29.019772 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:19:29.019780 kernel: PTP clock support registered Nov 1 00:19:29.019787 kernel: Registered efivars operations Nov 1 00:19:29.019793 kernel: No ACPI PMU IRQ for CPU0 Nov 1 00:19:29.019799 kernel: No ACPI PMU IRQ for CPU1 Nov 1 00:19:29.019806 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 1 00:19:29.019813 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:19:29.019819 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:19:29.019826 kernel: pnp: PnP ACPI init Nov 1 00:19:29.019832 kernel: pnp: PnP ACPI: found 0 devices Nov 1 00:19:29.019840 kernel: NET: Registered PF_INET protocol family Nov 1 00:19:29.019847 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:19:29.019853 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:19:29.019860 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:19:29.019867 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:19:29.019873 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Nov 1 00:19:29.019880 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:19:29.019886 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:19:29.019894 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:19:29.019901 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:19:29.019907 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:19:29.019914 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Nov 1 00:19:29.019920 kernel: kvm [1]: HYP mode not available Nov 1 00:19:29.019940 kernel: Initialise system trusted keyrings Nov 1 00:19:29.019947 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:19:29.019953 kernel: Key type asymmetric registered Nov 1 00:19:29.019960 kernel: Asymmetric key parser 'x509' registered Nov 1 00:19:29.019968 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:19:29.019974 kernel: io scheduler mq-deadline registered Nov 1 00:19:29.019981 kernel: io scheduler kyber registered Nov 1 00:19:29.019987 kernel: io scheduler bfq registered Nov 1 00:19:29.019994 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:19:29.020000 kernel: thunder_xcv, ver 1.0 Nov 1 00:19:29.020007 kernel: thunder_bgx, ver 1.0 Nov 1 00:19:29.020013 kernel: nicpf, ver 1.0 Nov 1 00:19:29.020019 kernel: nicvf, ver 1.0 Nov 1 00:19:29.020135 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 1 00:19:29.020198 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-01T00:19:28 UTC (1761956368) Nov 1 00:19:29.020207 kernel: efifb: probing for efifb Nov 1 00:19:29.020214 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 1 00:19:29.020220 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 1 00:19:29.020227 kernel: efifb: scrolling: redraw Nov 1 00:19:29.020233 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 00:19:29.020240 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:19:29.020248 kernel: fb0: EFI VGA frame buffer device Nov 1 00:19:29.020255 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Nov 1 00:19:29.020262 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:19:29.020268 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:19:29.020275 kernel: Segment Routing with IPv6 Nov 1 00:19:29.020281 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:19:29.020288 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:19:29.020295 kernel: Key type dns_resolver registered Nov 1 00:19:29.020301 kernel: registered taskstats version 1 Nov 1 00:19:29.020308 kernel: Loading compiled-in X.509 certificates Nov 1 00:19:29.020316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 4aa5071b9a6f96878595e36d4bd5862a671c915d' Nov 1 00:19:29.020322 kernel: Key type .fscrypt registered Nov 1 00:19:29.020329 kernel: Key type fscrypt-provisioning registered Nov 1 00:19:29.020335 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:19:29.020342 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:19:29.020348 kernel: ima: No architecture policies found Nov 1 00:19:29.020355 kernel: clk: Disabling unused clocks Nov 1 00:19:29.020361 kernel: Freeing unused kernel memory: 36416K Nov 1 00:19:29.020369 kernel: Run /init as init process Nov 1 00:19:29.020375 kernel: with arguments: Nov 1 00:19:29.020381 kernel: /init Nov 1 00:19:29.020388 kernel: with environment: Nov 1 00:19:29.020394 kernel: HOME=/ Nov 1 00:19:29.020400 kernel: TERM=linux Nov 1 00:19:29.020407 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:19:29.020415 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:19:29.020425 systemd[1]: Detected virtualization microsoft. Nov 1 00:19:29.020433 systemd[1]: Detected architecture arm64. Nov 1 00:19:29.020440 systemd[1]: Running in initrd. Nov 1 00:19:29.020446 systemd[1]: No hostname configured, using default hostname. Nov 1 00:19:29.020453 systemd[1]: Hostname set to . Nov 1 00:19:29.020461 systemd[1]: Initializing machine ID from random generator. Nov 1 00:19:29.020468 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:19:29.020475 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:19:29.020483 systemd[1]: Reached target cryptsetup.target. Nov 1 00:19:29.020490 systemd[1]: Reached target paths.target. Nov 1 00:19:29.020497 systemd[1]: Reached target slices.target. Nov 1 00:19:29.020504 systemd[1]: Reached target swap.target. Nov 1 00:19:29.020511 systemd[1]: Reached target timers.target. Nov 1 00:19:29.020518 systemd[1]: Listening on iscsid.socket. Nov 1 00:19:29.020525 systemd[1]: Listening on iscsiuio.socket. Nov 1 00:19:29.020532 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:19:29.020540 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:19:29.020548 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:19:29.020555 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:19:29.020562 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:19:29.020569 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:19:29.020576 systemd[1]: Reached target sockets.target. Nov 1 00:19:29.020583 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:19:29.020590 systemd[1]: Finished network-cleanup.service. 
Nov 1 00:19:29.020597 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:19:29.020605 systemd[1]: Starting systemd-journald.service... Nov 1 00:19:29.020612 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:19:29.020619 systemd[1]: Starting systemd-resolved.service... Nov 1 00:19:29.020630 systemd-journald[276]: Journal started Nov 1 00:19:29.020668 systemd-journald[276]: Runtime Journal (/run/log/journal/fa8f14bd3f0a4b929ab7b97c43b8a12c) is 8.0M, max 78.5M, 70.5M free. Nov 1 00:19:29.013965 systemd-modules-load[277]: Inserted module 'overlay' Nov 1 00:19:29.042278 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:19:29.056946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:19:29.069580 systemd[1]: Started systemd-journald.service. Nov 1 00:19:29.069635 kernel: Bridge firewalling registered Nov 1 00:19:29.069745 systemd-modules-load[277]: Inserted module 'br_netfilter' Nov 1 00:19:29.104401 kernel: audit: type=1130 audit(1761956369.069:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.104424 kernel: SCSI subsystem initialized Nov 1 00:19:29.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.092534 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:19:29.094184 systemd-resolved[278]: Positive Trust Anchors: Nov 1 00:19:29.094191 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:19:29.156654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:19:29.156683 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:19:29.156692 kernel: audit: type=1130 audit(1761956369.137:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.094220 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:19:29.218331 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:19:29.218355 kernel: audit: type=1130 audit(1761956369.203:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:19:29.096404 systemd-resolved[278]: Defaulting to hostname 'linux'. Nov 1 00:19:29.138216 systemd[1]: Started systemd-resolved.service. Nov 1 00:19:29.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.203347 systemd-modules-load[277]: Inserted module 'dm_multipath' Nov 1 00:19:29.279400 kernel: audit: type=1130 audit(1761956369.229:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.279436 kernel: audit: type=1130 audit(1761956369.253:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.220110 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:19:29.248682 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:19:29.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.254250 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:19:29.279776 systemd[1]: Reached target nss-lookup.target. Nov 1 00:19:29.305751 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:19:29.344180 kernel: audit: type=1130 audit(1761956369.279:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.314261 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:19:29.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.332682 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:19:29.372179 kernel: audit: type=1130 audit(1761956369.344:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.340635 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:19:29.367260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:19:29.382043 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:19:29.410674 kernel: audit: type=1130 audit(1761956369.380:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.410602 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:19:29.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:19:29.437104 kernel: audit: type=1130 audit(1761956369.406:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.439301 dracut-cmdline[298]: dracut-dracut-053 Nov 1 00:19:29.444326 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29 Nov 1 00:19:29.530943 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:19:29.546948 kernel: iscsi: registered transport (tcp) Nov 1 00:19:29.567750 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:19:29.567775 kernel: QLogic iSCSI HBA Driver Nov 1 00:19:29.597695 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:19:29.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:29.605403 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:19:29.657944 kernel: raid6: neonx8 gen() 13756 MB/s Nov 1 00:19:29.677947 kernel: raid6: neonx8 xor() 10784 MB/s Nov 1 00:19:29.697936 kernel: raid6: neonx4 gen() 13532 MB/s Nov 1 00:19:29.718934 kernel: raid6: neonx4 xor() 10975 MB/s Nov 1 00:19:29.738934 kernel: raid6: neonx2 gen() 12955 MB/s Nov 1 00:19:29.758932 kernel: raid6: neonx2 xor() 10244 MB/s Nov 1 00:19:29.779933 kernel: raid6: neonx1 gen() 10568 MB/s Nov 1 00:19:29.799934 kernel: raid6: neonx1 xor() 8795 MB/s Nov 1 00:19:29.819935 kernel: raid6: int64x8 gen() 6276 MB/s Nov 1 00:19:29.840935 kernel: raid6: int64x8 xor() 3544 MB/s Nov 1 00:19:29.860932 kernel: raid6: int64x4 gen() 7208 MB/s Nov 1 00:19:29.880931 kernel: raid6: int64x4 xor() 3857 MB/s Nov 1 00:19:29.901934 kernel: raid6: int64x2 gen() 6155 MB/s Nov 1 00:19:29.921933 kernel: raid6: int64x2 xor() 3322 MB/s Nov 1 00:19:29.941933 kernel: raid6: int64x1 gen() 5049 MB/s Nov 1 00:19:29.967731 kernel: raid6: int64x1 xor() 2647 MB/s Nov 1 00:19:29.967747 kernel: raid6: using algorithm neonx8 gen() 13756 MB/s Nov 1 00:19:29.967755 kernel: raid6: .... xor() 10784 MB/s, rmw enabled Nov 1 00:19:29.971998 kernel: raid6: using neon recovery algorithm Nov 1 00:19:29.993098 kernel: xor: measuring software checksum speed Nov 1 00:19:29.993110 kernel: 8regs : 17188 MB/sec Nov 1 00:19:29.997028 kernel: 32regs : 20639 MB/sec Nov 1 00:19:30.000818 kernel: arm64_neon : 27719 MB/sec Nov 1 00:19:30.000828 kernel: xor: using function: arm64_neon (27719 MB/sec) Nov 1 00:19:30.061940 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Nov 1 00:19:30.070982 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:19:30.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:30.080000 audit: BPF prog-id=7 op=LOAD Nov 1 00:19:30.080000 audit: BPF prog-id=8 op=LOAD Nov 1 00:19:30.080875 systemd[1]: Starting systemd-udevd.service... 
Nov 1 00:19:30.100320 systemd-udevd[475]: Using default interface naming scheme 'v252'. Nov 1 00:19:30.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:30.107904 systemd[1]: Started systemd-udevd.service. Nov 1 00:19:30.115027 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:19:30.130870 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Nov 1 00:19:30.161913 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:19:30.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:30.167687 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:19:30.202881 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:19:30.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:30.259975 kernel: hv_vmbus: Vmbus version:5.3 Nov 1 00:19:30.283156 kernel: hv_vmbus: registering driver hid_hyperv Nov 1 00:19:30.283203 kernel: hv_vmbus: registering driver hv_storvsc Nov 1 00:19:30.283212 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 1 00:19:30.292251 kernel: scsi host0: storvsc_host_t Nov 1 00:19:30.292464 kernel: hv_vmbus: registering driver hv_netvsc Nov 1 00:19:30.292475 kernel: scsi host1: storvsc_host_t Nov 1 00:19:30.303453 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Nov 1 00:19:30.311204 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 1 00:19:30.311259 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 1 00:19:30.321944 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Nov 1 00:19:30.339262 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 1 00:19:30.360785 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 1 00:19:30.362070 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:19:30.362090 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 1 00:19:30.377462 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 1 00:19:30.412625 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:19:30.412798 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:19:30.412877 kernel: hv_netvsc 002248b6-5350-0022-48b6-5350002248b6 eth0: VF slot 1 added Nov 1 00:19:30.412985 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 1 00:19:30.413071 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 1 00:19:30.413149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:19:30.413158 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:19:30.426257 kernel: hv_vmbus: registering driver hv_pci Nov 1 00:19:30.426319 kernel: hv_pci e574c296-5b68-45dd-b483-fda68e04adbd: PCI VMBus probing: Using version 0x10004 Nov 1 00:19:30.508854 kernel: hv_pci e574c296-5b68-45dd-b483-fda68e04adbd: PCI host bridge to bus 5b68:00 Nov 1 00:19:30.508979 kernel: pci_bus 5b68:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Nov 1 00:19:30.509080 kernel: pci_bus 
5b68:00: No busn resource found for root bus, will use [bus 00-ff] Nov 1 00:19:30.509151 kernel: pci 5b68:00:02.0: [15b3:1018] type 00 class 0x020000 Nov 1 00:19:30.509242 kernel: pci 5b68:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Nov 1 00:19:30.509320 kernel: pci 5b68:00:02.0: enabling Extended Tags Nov 1 00:19:30.509396 kernel: pci 5b68:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5b68:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Nov 1 00:19:30.509474 kernel: pci_bus 5b68:00: busn_res: [bus 00-ff] end is updated to 00 Nov 1 00:19:30.509545 kernel: pci 5b68:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Nov 1 00:19:30.547895 kernel: mlx5_core 5b68:00:02.0: enabling device (0000 -> 0002) Nov 1 00:19:30.781974 kernel: mlx5_core 5b68:00:02.0: firmware version: 16.30.1284 Nov 1 00:19:30.782098 kernel: mlx5_core 5b68:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Nov 1 00:19:30.782178 kernel: hv_netvsc 002248b6-5350-0022-48b6-5350002248b6 eth0: VF registering: eth1 Nov 1 00:19:30.782259 kernel: mlx5_core 5b68:00:02.0 eth1: joined to eth0 Nov 1 00:19:30.790952 kernel: mlx5_core 5b68:00:02.0 enP23400s1: renamed from eth1 Nov 1 00:19:30.917956 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (533) Nov 1 00:19:30.919775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:19:30.936186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:19:31.187743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:19:31.194016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:19:31.206947 systemd[1]: Starting disk-uuid.service... Nov 1 00:19:31.215203 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:19:31.250029 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:19:31.259945 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:19:32.275466 disk-uuid[607]: The operation has completed successfully. Nov 1 00:19:32.281586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:19:32.354210 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:19:32.356081 systemd[1]: Finished disk-uuid.service. Nov 1 00:19:32.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:32.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:32.368598 systemd[1]: Starting verity-setup.service... Nov 1 00:19:32.417954 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 1 00:19:32.833952 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:19:32.840410 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:19:32.853078 systemd[1]: Finished verity-setup.service. Nov 1 00:19:32.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:32.920943 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:19:32.921068 systemd[1]: Mounted sysusr-usr.mount. 
Nov 1 00:19:32.925429 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:19:32.926207 systemd[1]: Starting ignition-setup.service... Nov 1 00:19:32.942541 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:19:32.978219 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:19:32.978281 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:19:32.983032 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:19:33.026207 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:19:33.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.043056 kernel: kauditd_printk_skb: 10 callbacks suppressed Nov 1 00:19:33.043100 kernel: audit: type=1130 audit(1761956373.030:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.041577 systemd[1]: Starting systemd-networkd.service... Nov 1 00:19:33.074326 kernel: audit: type=1334 audit(1761956373.040:22): prog-id=9 op=LOAD Nov 1 00:19:33.040000 audit: BPF prog-id=9 op=LOAD Nov 1 00:19:33.089626 systemd-networkd[874]: lo: Link UP Nov 1 00:19:33.089633 systemd-networkd[874]: lo: Gained carrier Nov 1 00:19:33.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.090060 systemd-networkd[874]: Enumeration completed Nov 1 00:19:33.134704 kernel: audit: type=1130 audit(1761956373.098:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.090154 systemd[1]: Started systemd-networkd.service. Nov 1 00:19:33.098837 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:19:33.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.119021 systemd[1]: Reached target network.target. Nov 1 00:19:33.173732 kernel: audit: type=1130 audit(1761956373.142:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.126855 systemd[1]: Starting iscsiuio.service... Nov 1 00:19:33.134425 systemd[1]: Started iscsiuio.service. Nov 1 00:19:33.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.200741 iscsid[881]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:19:33.200741 iscsid[881]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:19:33.200741 iscsid[881]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:19:33.200741 iscsid[881]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:19:33.200741 iscsid[881]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:19:33.200741 iscsid[881]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:19:33.200741 iscsid[881]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:19:33.306177 kernel: audit: type=1130 audit(1761956373.181:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.306201 kernel: audit: type=1130 audit(1761956373.268:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.165615 systemd[1]: Starting iscsid.service... Nov 1 00:19:33.173366 systemd[1]: Started iscsid.service. Nov 1 00:19:33.210014 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:19:33.257174 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:19:33.269780 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:19:33.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.299106 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:19:33.371032 kernel: audit: type=1130 audit(1761956373.343:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.311497 systemd[1]: Reached target remote-fs.target. Nov 1 00:19:33.321175 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:19:33.328767 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:19:33.336191 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:19:33.407968 kernel: mlx5_core 5b68:00:02.0 enP23400s1: Link up Nov 1 00:19:33.414940 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 00:19:33.457947 kernel: hv_netvsc 002248b6-5350-0022-48b6-5350002248b6 eth0: Data path switched to VF: enP23400s1 Nov 1 00:19:33.465131 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:19:33.464744 systemd-networkd[874]: enP23400s1: Link UP Nov 1 00:19:33.464830 systemd-networkd[874]: eth0: Link UP Nov 1 00:19:33.464971 systemd-networkd[874]: eth0: Gained carrier Nov 1 00:19:33.478840 systemd-networkd[874]: enP23400s1: Gained carrier Nov 1 00:19:33.493010 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.48/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 1 00:19:33.620420 systemd[1]: Finished ignition-setup.service. Nov 1 00:19:33.649037 kernel: audit: type=1130 audit(1761956373.624:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:19:33.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:33.626181 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:19:35.355075 systemd-networkd[874]: eth0: Gained IPv6LL Nov 1 00:19:37.188598 ignition[901]: Ignition 2.14.0 Nov 1 00:19:37.188610 ignition[901]: Stage: fetch-offline Nov 1 00:19:37.188663 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:37.188687 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:37.278014 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:37.278183 ignition[901]: parsed url from cmdline: "" Nov 1 00:19:37.278187 ignition[901]: no config URL provided Nov 1 00:19:37.278193 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:19:37.319165 kernel: audit: type=1130 audit(1761956377.293:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:37.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:37.285178 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:19:37.278200 ignition[901]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:19:37.294963 systemd[1]: Starting ignition-fetch.service... Nov 1 00:19:37.278206 ignition[901]: failed to fetch config: resource requires networking Nov 1 00:19:37.278428 ignition[901]: Ignition finished successfully Nov 1 00:19:37.307322 ignition[907]: Ignition 2.14.0 Nov 1 00:19:37.307327 ignition[907]: Stage: fetch Nov 1 00:19:37.307442 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:37.307465 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:37.310646 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:37.322139 ignition[907]: parsed url from cmdline: "" Nov 1 00:19:37.322145 ignition[907]: no config URL provided Nov 1 00:19:37.322165 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:19:37.322186 ignition[907]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:19:37.322241 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 1 00:19:37.423300 ignition[907]: GET result: OK Nov 1 00:19:37.423388 ignition[907]: config has been read from IMDS userdata Nov 1 00:19:37.426997 unknown[907]: fetched base config from "system" Nov 1 00:19:37.461936 kernel: audit: type=1130 audit(1761956377.437:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:37.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:19:37.423431 ignition[907]: parsing config with SHA512: c2b58b5aa288a779a5158673777116edc8600e297305f8fc491b7ea70832cb4e46c9157c90023895d9c7f73c6df9a101871081e646e73f773624e4ded5517b02 Nov 1 00:19:37.427004 unknown[907]: fetched base config from "system" Nov 1 00:19:37.427567 ignition[907]: fetch: fetch complete Nov 1 00:19:37.427021 unknown[907]: fetched user config from "azure" Nov 1 00:19:37.427573 ignition[907]: fetch: fetch passed Nov 1 00:19:37.432875 systemd[1]: Finished ignition-fetch.service. Nov 1 00:19:37.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:37.427619 ignition[907]: Ignition finished successfully Nov 1 00:19:37.438358 systemd[1]: Starting ignition-kargs.service... Nov 1 00:19:37.467949 ignition[913]: Ignition 2.14.0 Nov 1 00:19:37.477773 systemd[1]: Finished ignition-kargs.service. Nov 1 00:19:37.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:37.467955 ignition[913]: Stage: kargs Nov 1 00:19:37.483173 systemd[1]: Starting ignition-disks.service... Nov 1 00:19:37.468064 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:37.503146 systemd[1]: Finished ignition-disks.service. Nov 1 00:19:37.468081 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:37.511058 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:19:37.470702 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:37.520275 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:19:37.474068 ignition[913]: kargs: kargs passed Nov 1 00:19:37.528672 systemd[1]: Reached target local-fs.target. Nov 1 00:19:37.474110 ignition[913]: Ignition finished successfully Nov 1 00:19:37.537716 systemd[1]: Reached target sysinit.target. Nov 1 00:19:37.496260 ignition[919]: Ignition 2.14.0 Nov 1 00:19:37.548156 systemd[1]: Reached target basic.target. Nov 1 00:19:37.496266 ignition[919]: Stage: disks Nov 1 00:19:37.557904 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:19:37.496396 ignition[919]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:37.496416 ignition[919]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:37.499206 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:37.501340 ignition[919]: disks: disks passed Nov 1 00:19:37.501395 ignition[919]: Ignition finished successfully Nov 1 00:19:37.647450 systemd-fsck[927]: ROOT: clean, 637/7326000 files, 481087/7359488 blocks Nov 1 00:19:37.654668 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:19:37.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:37.666127 systemd[1]: Mounting sysroot.mount... Nov 1 00:19:37.691943 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Nov 1 00:19:37.692465 systemd[1]: Mounted sysroot.mount. Nov 1 00:19:37.696583 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:19:37.737774 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:19:37.742629 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 00:19:37.750608 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:19:37.750643 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:19:37.757173 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:19:37.904677 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:19:37.909822 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:19:37.938900 initrd-setup-root[943]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:19:37.953164 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Nov 1 00:19:37.953187 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:19:37.958312 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:19:37.963066 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:19:37.971617 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:19:37.982735 initrd-setup-root[969]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:19:38.006541 initrd-setup-root[977]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:19:38.031257 initrd-setup-root[985]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:19:38.704724 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:19:38.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:38.710394 systemd[1]: Starting ignition-mount.service... Nov 1 00:19:38.744156 kernel: kauditd_printk_skb: 3 callbacks suppressed Nov 1 00:19:38.744178 kernel: audit: type=1130 audit(1761956378.709:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:38.744400 systemd[1]: Starting sysroot-boot.service... Nov 1 00:19:38.749463 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:19:38.749645 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 00:19:38.767801 ignition[1004]: INFO : Ignition 2.14.0 Nov 1 00:19:38.767801 ignition[1004]: INFO : Stage: mount Nov 1 00:19:38.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:38.802339 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:38.802339 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:38.802339 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:38.802339 ignition[1004]: INFO : mount: mount passed Nov 1 00:19:38.802339 ignition[1004]: INFO : Ignition finished successfully Nov 1 00:19:38.842683 kernel: audit: type=1130 audit(1761956378.782:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:19:38.777694 systemd[1]: Finished ignition-mount.service. Nov 1 00:19:38.850522 systemd[1]: Finished sysroot-boot.service. Nov 1 00:19:38.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:38.876941 kernel: audit: type=1130 audit(1761956378.854:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:39.390672 coreos-metadata[937]: Nov 01 00:19:39.390 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 00:19:39.401265 coreos-metadata[937]: Nov 01 00:19:39.401 INFO Fetch successful Nov 1 00:19:39.436098 coreos-metadata[937]: Nov 01 00:19:39.436 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 1 00:19:39.448332 coreos-metadata[937]: Nov 01 00:19:39.448 INFO Fetch successful Nov 1 00:19:39.492668 coreos-metadata[937]: Nov 01 00:19:39.492 INFO wrote hostname ci-3510.3.8-n-ec0975c3e1 to /sysroot/etc/hostname Nov 1 00:19:39.501831 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 00:19:39.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:39.528214 systemd[1]: Starting ignition-files.service... Nov 1 00:19:39.538042 kernel: audit: type=1130 audit(1761956379.506:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:39.537121 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:19:39.564950 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1016) Nov 1 00:19:39.578538 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:19:39.578586 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:19:39.578596 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:19:39.592520 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Nov 1 00:19:39.613068 ignition[1035]: INFO : Ignition 2.14.0 Nov 1 00:19:39.613068 ignition[1035]: INFO : Stage: files Nov 1 00:19:39.622947 ignition[1035]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:39.622947 ignition[1035]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:39.643917 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:39.643917 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:19:39.643917 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:19:39.643917 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:19:39.773899 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:19:39.782027 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:19:39.806167 unknown[1035]: wrote ssh authorized keys file for user: core Nov 1 00:19:39.811595 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:19:39.822661 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 1 00:19:39.833457 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 1 00:19:39.927352 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:19:40.099426 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 1 00:19:40.129554 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:19:40.139963 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 1 00:19:40.311191 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:19:40.389945 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:19:40.400060 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486203985" Nov 1 00:19:40.488472 ignition[1035]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486203985": device or resource busy Nov 1 00:19:40.488472 ignition[1035]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1486203985", trying btrfs: device or resource busy Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486203985" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1486203985" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1486203985" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1486203985" Nov 1 00:19:40.488472 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Nov 1 00:19:40.478859 systemd[1]: mnt-oem1486203985.mount: Deactivated successfully. 
Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1640577182" Nov 1 00:19:40.659437 ignition[1035]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1640577182": device or resource busy Nov 1 00:19:40.659437 ignition[1035]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1640577182", trying btrfs: device or resource busy Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1640577182" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1640577182" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1640577182" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1640577182" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 1 00:19:40.659437 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 1 00:19:41.018008 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Nov 1 00:19:41.242324 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 1 00:19:41.242324 ignition[1035]: INFO : files: op(14): [started] processing unit "waagent.service" Nov 1 00:19:41.242324 ignition[1035]: INFO : files: op(14): [finished] processing unit "waagent.service" Nov 1 00:19:41.242324 ignition[1035]: INFO : files: op(15): [started] processing unit "nvidia.service" Nov 1 00:19:41.242324 ignition[1035]: INFO : files: op(15): [finished] processing unit "nvidia.service" Nov 1 00:19:41.307191 kernel: audit: type=1130 audit(1761956381.266:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.256276 systemd[1]: Finished ignition-files.service. 
Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:19:41.316310 ignition[1035]: INFO : files: files passed Nov 1 00:19:41.316310 ignition[1035]: INFO : Ignition finished successfully Nov 1 00:19:41.579561 kernel: audit: type=1130 audit(1761956381.335:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.579599 kernel: audit: type=1131 audit(1761956381.335:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.579609 kernel: audit: type=1130 audit(1761956381.399:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.579619 kernel: audit: type=1130 audit(1761956381.467:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.579628 kernel: audit: type=1131 audit(1761956381.467:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:19:41.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.291979 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:19:41.586160 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:19:41.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.312236 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:19:41.313166 systemd[1]: Starting ignition-quench.service... Nov 1 00:19:41.326047 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:19:41.326176 systemd[1]: Finished ignition-quench.service. Nov 1 00:19:41.392287 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:19:41.399720 systemd[1]: Reached target ignition-complete.target. Nov 1 00:19:41.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.431436 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:19:41.459889 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:19:41.460028 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:19:41.468259 systemd[1]: Reached target initrd-fs.target. Nov 1 00:19:41.521069 systemd[1]: Reached target initrd.target. Nov 1 00:19:41.533485 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:19:41.541050 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:19:41.586045 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:19:41.591955 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:19:41.619150 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:19:41.626110 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:19:41.636515 systemd[1]: Stopped target timers.target. Nov 1 00:19:41.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.645402 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:19:41.645520 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:19:41.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.655828 systemd[1]: Stopped target initrd.target. 
Nov 1 00:19:41.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.665069 systemd[1]: Stopped target basic.target. Nov 1 00:19:41.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.673647 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:19:41.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.684235 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:19:41.694111 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:19:41.845145 iscsid[881]: iscsid shutting down. Nov 1 00:19:41.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.703076 systemd[1]: Stopped target remote-fs.target. Nov 1 00:19:41.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.873963 ignition[1073]: INFO : Ignition 2.14.0 Nov 1 00:19:41.873963 ignition[1073]: INFO : Stage: umount Nov 1 00:19:41.873963 ignition[1073]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:19:41.873963 ignition[1073]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:19:41.873963 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:19:41.873963 ignition[1073]: INFO : umount: umount passed Nov 1 00:19:41.873963 ignition[1073]: INFO : Ignition finished successfully Nov 1 00:19:41.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.711787 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:19:41.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:19:41.723759 systemd[1]: Stopped target sysinit.target. Nov 1 00:19:41.737076 systemd[1]: Stopped target local-fs.target. Nov 1 00:19:41.746442 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:19:41.756282 systemd[1]: Stopped target swap.target. Nov 1 00:19:41.765429 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:19:41.765554 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:19:41.775027 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:19:42.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.783387 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:19:41.783493 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:19:42.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.793658 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:19:42.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:42.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.793760 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:19:41.803727 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:19:41.803822 systemd[1]: Stopped ignition-files.service. Nov 1 00:19:41.812645 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:19:41.812746 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 00:19:42.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.822800 systemd[1]: Stopping ignition-mount.service... Nov 1 00:19:42.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.841995 systemd[1]: Stopping iscsid.service... Nov 1 00:19:42.108000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:19:41.850084 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:19:41.853973 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:19:41.854147 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:19:42.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.859351 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:19:42.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.859493 systemd[1]: Stopped dracut-pre-trigger.service. 
Nov 1 00:19:42.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.872159 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:19:41.872274 systemd[1]: Stopped iscsid.service. Nov 1 00:19:42.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.878369 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:19:41.878467 systemd[1]: Stopped ignition-mount.service. Nov 1 00:19:41.894031 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:19:42.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.896086 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:19:42.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.896161 systemd[1]: Stopped ignition-disks.service. Nov 1 00:19:42.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.919412 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:19:42.263728 kernel: hv_netvsc 002248b6-5350-0022-48b6-5350002248b6 eth0: Data path switched from VF: enP23400s1 Nov 1 00:19:42.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.919482 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:19:42.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:42.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.931718 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:19:42.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.931774 systemd[1]: Stopped ignition-fetch.service. Nov 1 00:19:42.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:42.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.940383 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Nov 1 00:19:42.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.940431 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:19:41.951308 systemd[1]: Stopped target paths.target. Nov 1 00:19:41.959983 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:19:42.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:41.964314 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:19:41.973011 systemd[1]: Stopped target slices.target. Nov 1 00:19:41.981554 systemd[1]: Stopped target sockets.target. Nov 1 00:19:41.989590 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:19:41.989644 systemd[1]: Closed iscsid.socket. Nov 1 00:19:41.999846 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:19:41.999896 systemd[1]: Stopped ignition-setup.service. Nov 1 00:19:42.379915 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Nov 1 00:19:42.008541 systemd[1]: Stopping iscsiuio.service... Nov 1 00:19:42.017563 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:19:42.017668 systemd[1]: Stopped iscsiuio.service. Nov 1 00:19:42.026891 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:19:42.026993 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:19:42.035263 systemd[1]: Stopped target network.target. Nov 1 00:19:42.046745 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:19:42.046794 systemd[1]: Closed iscsiuio.socket. Nov 1 00:19:42.057683 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:19:42.067110 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:19:42.074937 systemd-networkd[874]: eth0: DHCPv6 lease lost Nov 1 00:19:42.379000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:19:42.076393 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:19:42.076493 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:19:42.089429 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:19:42.089541 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:19:42.100106 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:19:42.100146 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:19:42.110131 systemd[1]: Stopping network-cleanup.service... Nov 1 00:19:42.122858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:19:42.123039 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:19:42.134790 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:19:42.134846 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:19:42.151294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:19:42.151352 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:19:42.157045 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:19:42.170350 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:19:42.177170 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:19:42.177327 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:19:42.182716 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:19:42.182767 systemd[1]: Closed systemd-udevd-control.socket. 
Nov 1 00:19:42.191371 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:19:42.191409 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:19:42.202494 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:19:42.202552 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:19:42.211014 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:19:42.211061 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:19:42.221188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:19:42.221242 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:19:42.240831 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:19:42.250254 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:19:42.250335 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Nov 1 00:19:42.263451 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:19:42.263519 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:19:42.268671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:19:42.268718 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:19:42.274916 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 1 00:19:42.275565 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:19:42.275671 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:19:42.283069 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:19:42.283164 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:19:42.293486 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:19:42.293539 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:19:42.314189 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:19:42.314336 systemd[1]: Stopped network-cleanup.service. Nov 1 00:19:42.321420 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:19:42.332994 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:19:42.350794 systemd[1]: Switching root. Nov 1 00:19:42.381405 systemd-journald[276]: Journal stopped Nov 1 00:19:59.679831 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:19:59.679852 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 00:19:59.679862 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:19:59.679872 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:19:59.679880 kernel: SELinux: policy capability open_perms=1 Nov 1 00:19:59.679888 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:19:59.679897 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:19:59.679905 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:19:59.679913 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:19:59.679921 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:19:59.679943 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:19:59.679953 kernel: kauditd_printk_skb: 38 callbacks suppressed Nov 1 00:19:59.679961 kernel: audit: type=1403 audit(1761956385.232:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:19:59.679971 systemd[1]: Successfully loaded SELinux policy in 410.303ms. Nov 1 00:19:59.679982 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.978ms. 
Nov 1 00:19:59.679994 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:19:59.680006 systemd[1]: Detected virtualization microsoft. Nov 1 00:19:59.680014 systemd[1]: Detected architecture arm64. Nov 1 00:19:59.680023 systemd[1]: Detected first boot. Nov 1 00:19:59.680032 systemd[1]: Hostname set to . Nov 1 00:19:59.680041 systemd[1]: Initializing machine ID from random generator. Nov 1 00:19:59.680050 kernel: audit: type=1400 audit(1761956386.261:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:19:59.680061 kernel: audit: type=1400 audit(1761956386.261:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:19:59.680070 kernel: audit: type=1334 audit(1761956386.261:85): prog-id=10 op=LOAD Nov 1 00:19:59.680078 kernel: audit: type=1334 audit(1761956386.261:86): prog-id=10 op=UNLOAD Nov 1 00:19:59.680086 kernel: audit: type=1334 audit(1761956386.279:87): prog-id=11 op=LOAD Nov 1 00:19:59.680095 kernel: audit: type=1334 audit(1761956386.279:88): prog-id=11 op=UNLOAD Nov 1 00:19:59.680103 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:19:59.680112 kernel: audit: type=1400 audit(1761956387.866:89): avc: denied { associate } for pid=1107 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:19:59.680123 kernel: audit: type=1300 audit(1761956387.866:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227ec a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1090 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:59.680133 kernel: audit: type=1327 audit(1761956387.866:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:19:59.680141 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:19:59.680151 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:19:59.680160 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:19:59.680170 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 00:19:59.680180 kernel: kauditd_printk_skb: 6 callbacks suppressed Nov 1 00:19:59.680188 kernel: audit: type=1334 audit(1761956398.812:91): prog-id=12 op=LOAD Nov 1 00:19:59.680197 kernel: audit: type=1334 audit(1761956398.812:92): prog-id=3 op=UNLOAD Nov 1 00:19:59.680206 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:19:59.680214 kernel: audit: type=1334 audit(1761956398.818:93): prog-id=13 op=LOAD Nov 1 00:19:59.680225 systemd[1]: Stopped initrd-switch-root.service. Nov 1 00:19:59.680234 kernel: audit: type=1334 audit(1761956398.823:94): prog-id=14 op=LOAD Nov 1 00:19:59.680243 kernel: audit: type=1334 audit(1761956398.823:95): prog-id=4 op=UNLOAD Nov 1 00:19:59.680253 kernel: audit: type=1334 audit(1761956398.823:96): prog-id=5 op=UNLOAD Nov 1 00:19:59.680262 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:19:59.680272 kernel: audit: type=1131 audit(1761956398.824:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.680281 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:19:59.680289 kernel: audit: type=1334 audit(1761956398.846:98): prog-id=12 op=UNLOAD Nov 1 00:19:59.680298 kernel: audit: type=1130 audit(1761956398.867:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.680307 kernel: audit: type=1131 audit(1761956398.867:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.680317 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:19:59.680326 systemd[1]: Created slice system-getty.slice. Nov 1 00:19:59.680336 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:19:59.680345 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:19:59.680354 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:19:59.680363 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:19:59.680372 systemd[1]: Created slice user.slice. Nov 1 00:19:59.680381 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:19:59.680391 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:19:59.680401 systemd[1]: Set up automount boot.automount. Nov 1 00:19:59.680411 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:19:59.680420 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 00:19:59.680429 systemd[1]: Stopped target initrd-fs.target. Nov 1 00:19:59.680438 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 00:19:59.680447 systemd[1]: Reached target integritysetup.target. Nov 1 00:19:59.680457 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:19:59.680466 systemd[1]: Reached target remote-fs.target. Nov 1 00:19:59.680476 systemd[1]: Reached target slices.target. Nov 1 00:19:59.680485 systemd[1]: Reached target swap.target. Nov 1 00:19:59.680494 systemd[1]: Reached target torcx.target. Nov 1 00:19:59.680504 systemd[1]: Reached target veritysetup.target. Nov 1 00:19:59.680513 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:19:59.680522 systemd[1]: Listening on systemd-initctl.socket. 
Nov 1 00:19:59.680531 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:19:59.680542 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:19:59.680551 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:19:59.680560 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:19:59.680569 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:19:59.680578 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:19:59.680589 systemd[1]: Mounting media.mount... Nov 1 00:19:59.680599 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:19:59.680608 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:19:59.680618 systemd[1]: Mounting tmp.mount... Nov 1 00:19:59.680627 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:19:59.680636 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:19:59.680646 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:19:59.680655 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:19:59.680664 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:19:59.680673 systemd[1]: Starting modprobe@drm.service... Nov 1 00:19:59.680684 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:19:59.680693 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:19:59.680702 systemd[1]: Starting modprobe@loop.service... Nov 1 00:19:59.680712 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:19:59.680721 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:19:59.680730 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:19:59.680739 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:19:59.680748 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:19:59.680758 systemd[1]: Stopped systemd-journald.service. Nov 1 00:19:59.680768 systemd[1]: systemd-journald.service: Consumed 3.017s CPU time. Nov 1 00:19:59.680777 systemd[1]: Starting systemd-journald.service... Nov 1 00:19:59.680788 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:19:59.680796 kernel: fuse: init (API version 7.34) Nov 1 00:19:59.680805 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:19:59.680815 kernel: loop: module loaded Nov 1 00:19:59.680824 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:19:59.680833 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:19:59.680842 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:19:59.680852 systemd[1]: Stopped verity-setup.service. Nov 1 00:19:59.680862 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:19:59.680871 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:19:59.680880 systemd[1]: Mounted media.mount. Nov 1 00:19:59.680889 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:19:59.680898 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:19:59.680907 systemd[1]: Mounted tmp.mount. Nov 1 00:19:59.680916 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:19:59.680936 systemd-journald[1209]: Journal started Nov 1 00:19:59.680977 systemd-journald[1209]: Runtime Journal (/run/log/journal/5d7beecd27c54ea88ae51433c33ac2ca) is 8.0M, max 78.5M, 70.5M free. 
Nov 1 00:19:45.232000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:19:46.261000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:19:46.261000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:19:46.261000 audit: BPF prog-id=10 op=LOAD Nov 1 00:19:46.261000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:19:46.279000 audit: BPF prog-id=11 op=LOAD Nov 1 00:19:46.279000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:19:47.866000 audit[1107]: AVC avc: denied { associate } for pid=1107 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:19:47.866000 audit[1107]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227ec a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1090 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:47.866000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:19:47.881000 audit[1107]: AVC avc: denied { associate } for pid=1107 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:19:47.881000 audit[1107]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228c9 a2=1ed a3=0 items=2 ppid=1090 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:47.881000 audit: CWD cwd="/" Nov 1 00:19:47.881000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:47.881000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:47.881000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:19:58.812000 audit: BPF prog-id=12 op=LOAD Nov 1 00:19:58.812000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:19:58.818000 audit: BPF prog-id=13 op=LOAD Nov 1 00:19:58.823000 audit: BPF prog-id=14 op=LOAD Nov 1 00:19:58.823000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:19:58.823000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:19:58.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:58.846000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:19:58.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:58.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.507000 audit: BPF prog-id=15 op=LOAD Nov 1 00:19:59.507000 audit: BPF prog-id=16 op=LOAD Nov 1 00:19:59.507000 audit: BPF prog-id=17 op=LOAD Nov 1 00:19:59.507000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:19:59.507000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:19:59.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.674000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:19:59.674000 audit[1209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd3626aa0 a2=4000 a3=1 items=0 ppid=1 pid=1209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:59.674000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:19:58.811296 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:19:47.778809 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:19:58.811308 systemd[1]: Unnecessary job was removed for dev-sda6.device. Nov 1 00:19:47.813472 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:19:58.825522 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 1 00:19:47.813495 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:19:58.825873 systemd[1]: systemd-journald.service: Consumed 3.017s CPU time. Nov 1 00:19:47.813536 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 00:19:47.813548 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 00:19:47.813588 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 00:19:47.813602 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 00:19:47.813806 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 00:19:47.813839 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 00:19:47.813850 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 00:19:47.847849 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 00:19:47.847910 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 00:19:47.847966 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 00:19:47.847993 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 00:19:47.848017 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 00:19:47.848030 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:47Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 00:19:54.640807 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:19:54.641085 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker 
/bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:19:54.641181 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:19:54.641338 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 00:19:54.641388 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 00:19:54.641442 /usr/lib/systemd/system-generators/torcx-generator[1107]: time="2025-11-01T00:19:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 00:19:59.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.696196 systemd[1]: Started systemd-journald.service. Nov 1 00:19:59.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.697208 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:19:59.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.702677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:19:59.702811 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:19:59.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.708060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:19:59.708186 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:19:59.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:19:59.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.713411 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:19:59.713537 systemd[1]: Finished modprobe@drm.service. Nov 1 00:19:59.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.718406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:19:59.718529 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:19:59.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.723668 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:19:59.723791 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:19:59.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.728525 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:19:59.728647 systemd[1]: Finished modprobe@loop.service. Nov 1 00:19:59.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.733806 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:19:59.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.739518 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:19:59.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.744696 systemd[1]: Reached target network-pre.target. Nov 1 00:19:59.750595 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Nov 1 00:19:59.756297 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:19:59.760458 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:19:59.796061 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:19:59.801692 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:19:59.806028 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:19:59.807187 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:19:59.811829 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:19:59.813088 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:19:59.819284 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:19:59.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.824394 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:19:59.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.830353 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:19:59.835878 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:19:59.841673 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:19:59.846658 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:19:59.855965 udevadm[1227]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:19:59.875835 systemd-journald[1209]: Time spent on flushing to /var/log/journal/5d7beecd27c54ea88ae51433c33ac2ca is 14.701ms for 1103 entries. Nov 1 00:19:59.875835 systemd-journald[1209]: System Journal (/var/log/journal/5d7beecd27c54ea88ae51433c33ac2ca) is 8.0M, max 2.6G, 2.6G free. Nov 1 00:20:00.003627 systemd-journald[1209]: Received client request to flush runtime journal. Nov 1 00:19:59.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:19:59.897395 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:19:59.902497 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:20:00.004654 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:20:00.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:00.094793 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:20:00.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:00.839015 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:20:00.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:20:00.845027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:20:02.038263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:20:02.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:02.125345 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:20:02.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:02.130000 audit: BPF prog-id=18 op=LOAD Nov 1 00:20:02.130000 audit: BPF prog-id=19 op=LOAD Nov 1 00:20:02.130000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:20:02.130000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:20:02.131719 systemd[1]: Starting systemd-udevd.service... Nov 1 00:20:02.149843 systemd-udevd[1232]: Using default interface naming scheme 'v252'. Nov 1 00:20:03.727743 systemd[1]: Started systemd-udevd.service. Nov 1 00:20:03.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:03.735000 audit: BPF prog-id=20 op=LOAD Nov 1 00:20:03.737153 systemd[1]: Starting systemd-networkd.service... Nov 1 00:20:03.764383 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Nov 1 00:20:03.846275 kernel: kauditd_printk_skb: 44 callbacks suppressed Nov 1 00:20:03.846384 kernel: audit: type=1334 audit(1761956403.829:143): prog-id=21 op=LOAD Nov 1 00:20:03.829000 audit: BPF prog-id=21 op=LOAD Nov 1 00:20:03.831145 systemd[1]: Starting systemd-userdbd.service... 
Nov 1 00:20:03.854750 kernel: audit: type=1334 audit(1761956403.829:144): prog-id=22 op=LOAD Nov 1 00:20:03.829000 audit: BPF prog-id=22 op=LOAD Nov 1 00:20:03.862442 kernel: audit: type=1334 audit(1761956403.829:145): prog-id=23 op=LOAD Nov 1 00:20:03.829000 audit: BPF prog-id=23 op=LOAD Nov 1 00:20:03.864943 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:20:03.893195 kernel: audit: type=1400 audit(1761956403.871:146): avc: denied { confidentiality } for pid=1236 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:20:03.893315 kernel: hv_vmbus: registering driver hv_balloon Nov 1 00:20:03.871000 audit[1236]: AVC avc: denied { confidentiality } for pid=1236 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:20:03.908530 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 1 00:20:03.908628 kernel: hv_balloon: Memory hot add disabled on ARM64 Nov 1 00:20:03.909995 kernel: audit: type=1300 audit(1761956403.871:146): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf348aac0 a1=aa2c a2=ffff9e8324b0 a3=aaaaf33eb010 items=12 ppid=1232 pid=1236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:20:03.871000 audit[1236]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf348aac0 a1=aa2c a2=ffff9e8324b0 a3=aaaaf33eb010 items=12 ppid=1232 pid=1236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:20:03.871000 audit: CWD cwd="/" Nov 1 00:20:03.944292 kernel: audit: type=1307 audit(1761956403.871:146): cwd="/" Nov 1 00:20:03.871000 audit: PATH item=0 name=(null) inode=7231 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.964016 kernel: audit: type=1302 audit(1761956403.871:146): item=0 name=(null) inode=7231 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=1 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.971037 kernel: hv_vmbus: registering driver hyperv_fb Nov 1 00:20:03.971123 kernel: audit: type=1302 audit(1761956403.871:146): item=1 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.993885 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 1 00:20:03.994016 kernel: audit: type=1302 audit(1761956403.871:146): item=2 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=2 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
00:20:04.018218 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 1 00:20:04.018356 kernel: audit: type=1302 audit(1761956403.871:146): item=3 name=(null) inode=9144 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=3 name=(null) inode=9144 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:04.023565 systemd[1]: Started systemd-userdbd.service. Nov 1 00:20:04.044959 kernel: hv_utils: Registering HyperV Utility Driver Nov 1 00:20:03.871000 audit: PATH item=4 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=5 name=(null) inode=9145 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=6 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=7 name=(null) inode=9146 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=8 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=9 name=(null) inode=9147 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=10 name=(null) inode=9143 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PATH item=11 name=(null) inode=9148 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:20:03.871000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:20:04.050980 kernel: Console: switching to colour dummy device 80x25 Nov 1 00:20:04.051055 kernel: hv_vmbus: registering driver hv_utils Nov 1 00:20:04.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.059948 kernel: hv_utils: Heartbeat IC version 3.0 Nov 1 00:20:04.060031 kernel: hv_utils: Shutdown IC version 3.2 Nov 1 00:20:04.062947 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:20:04.063032 kernel: hv_utils: TimeSync IC version 4.0 Nov 1 00:20:03.933226 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:20:03.998250 systemd-journald[1209]: Time jumped backwards, rotating. Nov 1 00:20:03.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:20:03.944602 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:20:03.950830 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:20:04.277910 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:20:04.340575 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:20:04.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.345733 systemd[1]: Reached target cryptsetup.target. Nov 1 00:20:04.351552 systemd[1]: Starting lvm2-activation.service... Nov 1 00:20:04.355533 lvm[1310]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:20:04.378588 systemd[1]: Finished lvm2-activation.service. Nov 1 00:20:04.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.393117 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:20:04.398165 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:20:04.398194 systemd[1]: Reached target local-fs.target. Nov 1 00:20:04.409866 systemd[1]: Reached target machines.target. Nov 1 00:20:04.415569 systemd[1]: Starting ldconfig.service... Nov 1 00:20:04.456601 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:20:04.456673 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:04.457869 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:20:04.463285 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:20:04.470389 systemd-networkd[1250]: lo: Link UP Nov 1 00:20:04.470396 systemd-networkd[1250]: lo: Gained carrier Nov 1 00:20:04.470551 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:20:04.471199 systemd-networkd[1250]: Enumeration completed Nov 1 00:20:04.476474 systemd[1]: Starting systemd-sysext.service... Nov 1 00:20:04.480888 systemd[1]: Started systemd-networkd.service. Nov 1 00:20:04.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.487357 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:20:04.558369 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:04.569521 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1312 (bootctl) Nov 1 00:20:04.570932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:20:04.593200 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:20:04.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.610615 systemd[1]: Unmounting usr-share-oem.mount... 
Nov 1 00:20:04.614705 kernel: mlx5_core 5b68:00:02.0 enP23400s1: Link up Nov 1 00:20:04.623712 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 00:20:04.656724 kernel: hv_netvsc 002248b6-5350-0022-48b6-5350002248b6 eth0: Data path switched to VF: enP23400s1 Nov 1 00:20:04.657275 systemd-networkd[1250]: enP23400s1: Link UP Nov 1 00:20:04.657596 systemd-networkd[1250]: eth0: Link UP Nov 1 00:20:04.657606 systemd-networkd[1250]: eth0: Gained carrier Nov 1 00:20:04.663254 systemd-networkd[1250]: enP23400s1: Gained carrier Nov 1 00:20:04.667547 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:20:04.667877 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:20:04.673828 systemd-networkd[1250]: eth0: DHCPv4 address 10.200.20.48/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 1 00:20:04.691060 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:20:04.692455 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:20:04.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.753989 kernel: loop0: detected capacity change from 0 to 200800 Nov 1 00:20:04.821714 systemd-fsck[1320]: fsck.fat 4.2 (2021-01-31) Nov 1 00:20:04.821714 systemd-fsck[1320]: /dev/sda1: 236 files, 117310/258078 clusters Nov 1 00:20:04.823511 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:20:04.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.838737 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:20:04.839404 systemd[1]: Mounting boot.mount... Nov 1 00:20:04.859767 kernel: loop1: detected capacity change from 0 to 200800 Nov 1 00:20:04.860878 systemd[1]: Mounted boot.mount. Nov 1 00:20:04.885408 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:20:04.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.896713 (sd-sysext)[1329]: Using extensions 'kubernetes'. Nov 1 00:20:04.897066 (sd-sysext)[1329]: Merged extensions into '/usr'. Nov 1 00:20:04.914490 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:20:04.918729 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:20:04.920118 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:20:04.926049 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:20:04.931835 systemd[1]: Starting modprobe@loop.service... Nov 1 00:20:04.935989 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:20:04.936131 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:04.938592 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:20:04.943402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:04.943542 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:20:04.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.948453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:04.948580 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:20:04.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.954236 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:04.954354 systemd[1]: Finished modprobe@loop.service. Nov 1 00:20:04.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.959487 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:20:04.959590 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:20:04.960532 systemd[1]: Finished systemd-sysext.service. Nov 1 00:20:04.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:04.966310 systemd[1]: Starting ensure-sysext.service... Nov 1 00:20:04.971669 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:20:04.982122 systemd[1]: Reloading. Nov 1 00:20:05.001761 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:20:05.034176 /usr/lib/systemd/system-generators/torcx-generator[1355]: time="2025-11-01T00:20:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:20:05.034496 /usr/lib/systemd/system-generators/torcx-generator[1355]: time="2025-11-01T00:20:05Z" level=info msg="torcx already run" Nov 1 00:20:05.038296 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:20:05.055483 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:20:05.118785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Nov 1 00:20:05.118805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:20:05.134458 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:20:05.198000 audit: BPF prog-id=24 op=LOAD Nov 1 00:20:05.198000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:20:05.198000 audit: BPF prog-id=25 op=LOAD Nov 1 00:20:05.198000 audit: BPF prog-id=26 op=LOAD Nov 1 00:20:05.198000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:20:05.198000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:20:05.199000 audit: BPF prog-id=27 op=LOAD Nov 1 00:20:05.199000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:20:05.199000 audit: BPF prog-id=28 op=LOAD Nov 1 00:20:05.199000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:20:05.199000 audit: BPF prog-id=29 op=LOAD Nov 1 00:20:05.199000 audit: BPF prog-id=30 op=LOAD Nov 1 00:20:05.199000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:20:05.199000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:20:05.201000 audit: BPF prog-id=31 op=LOAD Nov 1 00:20:05.201000 audit: BPF prog-id=32 op=LOAD Nov 1 00:20:05.201000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:20:05.201000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:20:05.219523 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:20:05.221262 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:20:05.227234 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:20:05.233952 systemd[1]: Starting modprobe@loop.service... Nov 1 00:20:05.238904 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:20:05.239038 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:05.239895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:05.240047 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:20:05.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.246168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:05.246298 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:20:05.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.252090 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:05.252217 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:20:05.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.258679 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:20:05.260087 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:20:05.266041 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:20:05.272039 systemd[1]: Starting modprobe@loop.service... Nov 1 00:20:05.276428 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:20:05.276565 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:05.277417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:05.277561 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:20:05.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.283084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:05.283357 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:20:05.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.289497 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:05.289624 systemd[1]: Finished modprobe@loop.service. Nov 1 00:20:05.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.297048 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:20:05.298390 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:20:05.304249 systemd[1]: Starting modprobe@drm.service... Nov 1 00:20:05.309600 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:20:05.315844 systemd[1]: Starting modprobe@loop.service... Nov 1 00:20:05.320220 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:20:05.320359 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:05.321334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:20:05.321475 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:20:05.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.326907 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:20:05.327038 systemd[1]: Finished modprobe@drm.service. Nov 1 00:20:05.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.332557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:20:05.332684 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:20:05.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.338299 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:20:05.338426 systemd[1]: Finished modprobe@loop.service. Nov 1 00:20:05.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:05.344066 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:20:05.344138 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:20:05.345234 systemd[1]: Finished ensure-sysext.service. Nov 1 00:20:05.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:06.545868 systemd-networkd[1250]: eth0: Gained IPv6LL Nov 1 00:20:06.550630 systemd[1]: Finished systemd-networkd-wait-online.service. 
Nov 1 00:20:06.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:08.996376 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:20:09.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.006089 kernel: kauditd_printk_skb: 65 callbacks suppressed Nov 1 00:20:09.006136 kernel: audit: type=1130 audit(1761956409.000:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.007520 systemd[1]: Starting audit-rules.service... Nov 1 00:20:09.029034 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:20:09.035386 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:20:09.042000 audit: BPF prog-id=33 op=LOAD Nov 1 00:20:09.044772 systemd[1]: Starting systemd-resolved.service... Nov 1 00:20:09.054470 kernel: audit: type=1334 audit(1761956409.042:204): prog-id=33 op=LOAD Nov 1 00:20:09.053000 audit: BPF prog-id=34 op=LOAD Nov 1 00:20:09.056503 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:20:09.065509 kernel: audit: type=1334 audit(1761956409.053:205): prog-id=34 op=LOAD Nov 1 00:20:09.067444 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:20:09.129000 audit[1432]: SYSTEM_BOOT pid=1432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.134863 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:20:09.154117 kernel: audit: type=1127 audit(1761956409.129:206): pid=1432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.175254 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:20:09.180254 kernel: audit: type=1130 audit(1761956409.153:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.181228 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:20:09.199772 kernel: audit: type=1130 audit(1761956409.178:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:20:09.246392 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:20:09.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.251940 systemd[1]: Reached target time-set.target. Nov 1 00:20:09.274358 kernel: audit: type=1130 audit(1761956409.250:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.286397 systemd-resolved[1430]: Positive Trust Anchors: Nov 1 00:20:09.286419 systemd-resolved[1430]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:09.286473 systemd-resolved[1430]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:20:09.382021 systemd-resolved[1430]: Using system hostname 'ci-3510.3.8-n-ec0975c3e1'. Nov 1 00:20:09.383501 systemd[1]: Started systemd-resolved.service. Nov 1 00:20:09.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.389461 systemd[1]: Reached target network.target. Nov 1 00:20:09.414340 kernel: audit: type=1130 audit(1761956409.387:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.415218 systemd[1]: Reached target network-online.target. Nov 1 00:20:09.421563 systemd[1]: Reached target nss-lookup.target. Nov 1 00:20:09.501335 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:20:09.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.527708 kernel: audit: type=1130 audit(1761956409.505:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:20:09.692541 systemd-timesyncd[1431]: Contacted time server 24.229.44.105:123 (0.flatcar.pool.ntp.org). Nov 1 00:20:09.692871 systemd-timesyncd[1431]: Initial clock synchronization to Sat 2025-11-01 00:20:09.689069 UTC. 
Nov 1 00:20:09.773000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:20:09.773000 audit[1447]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffde7d9920 a2=420 a3=0 items=0 ppid=1426 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:20:09.773000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:20:09.787705 kernel: audit: type=1305 audit(1761956409.773:212): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:20:09.792805 augenrules[1447]: No rules Nov 1 00:20:09.793537 systemd[1]: Finished audit-rules.service. Nov 1 00:20:17.896432 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:20:17.909189 systemd[1]: Finished ldconfig.service. Nov 1 00:20:17.915523 systemd[1]: Starting systemd-update-done.service... Nov 1 00:20:18.050032 systemd[1]: Finished systemd-update-done.service. Nov 1 00:20:18.055217 systemd[1]: Reached target sysinit.target. Nov 1 00:20:18.059950 systemd[1]: Started motdgen.path. Nov 1 00:20:18.064622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:20:18.071408 systemd[1]: Started logrotate.timer. Nov 1 00:20:18.075501 systemd[1]: Started mdadm.timer. Nov 1 00:20:18.079532 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:20:18.084332 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:20:18.084359 systemd[1]: Reached target paths.target. Nov 1 00:20:18.088640 systemd[1]: Reached target timers.target. Nov 1 00:20:18.094751 systemd[1]: Listening on dbus.socket. Nov 1 00:20:18.100349 systemd[1]: Starting docker.socket... Nov 1 00:20:18.142575 systemd[1]: Listening on sshd.socket. Nov 1 00:20:18.147726 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:18.148256 systemd[1]: Listening on docker.socket. Nov 1 00:20:18.153065 systemd[1]: Reached target sockets.target. Nov 1 00:20:18.157980 systemd[1]: Reached target basic.target. Nov 1 00:20:18.162589 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:20:18.162618 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:20:18.163828 systemd[1]: Starting containerd.service... Nov 1 00:20:18.169035 systemd[1]: Starting dbus.service... Nov 1 00:20:18.173982 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:20:18.180162 systemd[1]: Starting extend-filesystems.service... Nov 1 00:20:18.184736 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:20:18.210653 systemd[1]: Starting kubelet.service... Nov 1 00:20:18.215511 systemd[1]: Starting motdgen.service... Nov 1 00:20:18.220166 systemd[1]: Started nvidia.service. Nov 1 00:20:18.225978 systemd[1]: Starting prepare-helm.service... Nov 1 00:20:18.231093 systemd[1]: Starting ssh-key-proc-cmdline.service... 
Nov 1 00:20:18.237305 systemd[1]: Starting sshd-keygen.service... Nov 1 00:20:18.243657 systemd[1]: Starting systemd-logind.service... Nov 1 00:20:18.247713 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:20:18.247790 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:20:18.248331 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:20:18.249109 systemd[1]: Starting update-engine.service... Nov 1 00:20:18.255402 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:20:18.265132 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:20:18.265307 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:20:18.287987 jq[1457]: false Nov 1 00:20:18.288379 jq[1470]: true Nov 1 00:20:18.308754 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:20:18.308923 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:20:18.325709 extend-filesystems[1458]: Found loop1 Nov 1 00:20:18.325709 extend-filesystems[1458]: Found sda Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda1 Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda2 Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda3 Nov 1 00:20:18.335022 extend-filesystems[1458]: Found usr Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda4 Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda6 Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda7 Nov 1 00:20:18.335022 extend-filesystems[1458]: Found sda9 Nov 1 00:20:18.335022 extend-filesystems[1458]: Checking size of /dev/sda9 Nov 1 00:20:18.389044 jq[1480]: true Nov 1 00:20:18.383021 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:20:18.383197 systemd[1]: Finished motdgen.service. Nov 1 00:20:18.430011 systemd-logind[1467]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 1 00:20:18.430445 systemd-logind[1467]: New seat seat0. Nov 1 00:20:18.476727 env[1490]: time="2025-11-01T00:20:18.475092249Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:20:18.478145 tar[1473]: linux-arm64/LICENSE Nov 1 00:20:18.478392 tar[1473]: linux-arm64/helm Nov 1 00:20:18.490169 extend-filesystems[1458]: Old size kept for /dev/sda9 Nov 1 00:20:18.490169 extend-filesystems[1458]: Found sr0 Nov 1 00:20:18.490019 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:20:18.490177 systemd[1]: Finished extend-filesystems.service. Nov 1 00:20:18.538186 env[1490]: time="2025-11-01T00:20:18.538125362Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:20:18.539819 env[1490]: time="2025-11-01T00:20:18.539780372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:18.541018 env[1490]: time="2025-11-01T00:20:18.540977714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:18.541083 env[1490]: time="2025-11-01T00:20:18.541020510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:18.543465 env[1490]: time="2025-11-01T00:20:18.543431712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:18.543576 env[1490]: time="2025-11-01T00:20:18.543560857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:18.544220 env[1490]: time="2025-11-01T00:20:18.544198864Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:20:18.544274 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:20:18.544521 env[1490]: time="2025-11-01T00:20:18.544499949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:18.544971 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:20:18.551684 env[1490]: time="2025-11-01T00:20:18.551651807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:18.552020 env[1490]: time="2025-11-01T00:20:18.551999327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:18.555221 env[1490]: time="2025-11-01T00:20:18.555189241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:18.555968 env[1490]: time="2025-11-01T00:20:18.555946753Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:20:18.556138 env[1490]: time="2025-11-01T00:20:18.556120214Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:20:18.556857 env[1490]: time="2025-11-01T00:20:18.556831692Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587390459Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587440173Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587453211Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587494647Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587512285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587528163Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.587942 env[1490]: time="2025-11-01T00:20:18.587541081Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588273077Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588331230Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588347229Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588361987Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588378225Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588507930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588575882Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588819494Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588845331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588859250Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588912164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588925642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588937561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589283 env[1490]: time="2025-11-01T00:20:18.588949679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.588961918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.588973916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.588985075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.588997754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589011192Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589127139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589142937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589155696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589167134Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589181733Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589192251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:20:18.589582 env[1490]: time="2025-11-01T00:20:18.589209609Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:20:18.590281 env[1490]: time="2025-11-01T00:20:18.589244085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:20:18.590738 env[1490]: time="2025-11-01T00:20:18.590655083Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.591318647Z" level=info msg="Connect containerd service" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.591367121Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592436918Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592590141Z" level=info msg="Start subscribing containerd event" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592646734Z" level=info msg="Start recovering state" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592754962Z" level=info msg="Start event monitor" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592778799Z" level=info msg="Start snapshots syncer" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592789638Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.592798277Z" level=info msg="Start streaming server" Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.593209150Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.593260304Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:20:18.610230 env[1490]: time="2025-11-01T00:20:18.603298230Z" level=info msg="containerd successfully booted in 0.132855s" Nov 1 00:20:18.593401 systemd[1]: Started containerd.service. Nov 1 00:20:18.690939 systemd[1]: nvidia.service: Deactivated successfully. Nov 1 00:20:18.987001 dbus-daemon[1456]: [system] SELinux support is enabled Nov 1 00:20:18.987177 systemd[1]: Started dbus.service. Nov 1 00:20:18.992896 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:20:18.992919 systemd[1]: Reached target system-config.target. Nov 1 00:20:18.998612 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:20:18.998631 systemd[1]: Reached target user-config.target. Nov 1 00:20:19.004103 systemd[1]: Started systemd-logind.service. Nov 1 00:20:19.139023 update_engine[1468]: I1101 00:20:19.117502 1468 main.cc:92] Flatcar Update Engine starting Nov 1 00:20:19.245004 systemd[1]: Started update-engine.service. Nov 1 00:20:19.251348 update_engine[1468]: I1101 00:20:19.245049 1468 update_check_scheduler.cc:74] Next update check in 4m51s Nov 1 00:20:19.251868 systemd[1]: Started locksmithd.service. Nov 1 00:20:19.318305 tar[1473]: linux-arm64/README.md Nov 1 00:20:19.323301 systemd[1]: Finished prepare-helm.service. Nov 1 00:20:19.346548 systemd[1]: Started kubelet.service. 
Nov 1 00:20:19.764293 kubelet[1565]: E1101 00:20:19.764224 1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:19.765990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:19.766122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:20.905613 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:20:23.518659 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:20:23.535993 systemd[1]: Finished sshd-keygen.service. Nov 1 00:20:23.542772 systemd[1]: Starting issuegen.service... Nov 1 00:20:23.547592 systemd[1]: Started waagent.service. Nov 1 00:20:23.552521 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:20:23.552768 systemd[1]: Finished issuegen.service. Nov 1 00:20:23.558732 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:20:23.615390 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:20:23.621949 systemd[1]: Started getty@tty1.service. Nov 1 00:20:23.627621 systemd[1]: Started serial-getty@ttyAMA0.service. Nov 1 00:20:23.638071 systemd[1]: Reached target getty.target. Nov 1 00:20:23.642417 systemd[1]: Reached target multi-user.target. Nov 1 00:20:23.648471 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:20:23.656562 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:20:23.656759 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:20:23.663377 systemd[1]: Startup finished in 744ms (kernel) + 15.909s (initrd) + 39.533s (userspace) = 56.188s. Nov 1 00:20:25.191583 login[1589]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Nov 1 00:20:25.223771 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:20:25.442586 systemd[1]: Created slice user-500.slice. Nov 1 00:20:25.443747 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:20:25.445999 systemd-logind[1467]: New session 2 of user core. Nov 1 00:20:25.505053 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:20:25.506573 systemd[1]: Starting user@500.service... Nov 1 00:20:25.620210 (systemd)[1592]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:26.129668 systemd[1592]: Queued start job for default target default.target. Nov 1 00:20:26.130226 systemd[1592]: Reached target paths.target. Nov 1 00:20:26.130246 systemd[1592]: Reached target sockets.target. Nov 1 00:20:26.130257 systemd[1592]: Reached target timers.target. Nov 1 00:20:26.130267 systemd[1592]: Reached target basic.target. Nov 1 00:20:26.130312 systemd[1592]: Reached target default.target. Nov 1 00:20:26.130335 systemd[1592]: Startup finished in 503ms. Nov 1 00:20:26.130380 systemd[1]: Started user@500.service. Nov 1 00:20:26.131315 systemd[1]: Started session-2.scope. Nov 1 00:20:26.193026 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:20:26.197244 systemd[1]: Started session-1.scope. Nov 1 00:20:26.197739 systemd-logind[1467]: New session 1 of user core. Nov 1 00:20:29.991624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
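
Editor's note: the kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by `kubeadm init`/`kubeadm join`, so the restart loop that follows in this log is expected until the node is bootstrapped. A minimal sketch of what ends the loop, assuming manual bootstrapping rather than kubeadm; the file contents below are a hypothetical stand-in, not taken from this host:

    # Hypothetical stand-in: create the config file whose absence causes the
    # "failed to load Kubelet config file" error above. On a real node this
    # is produced by `kubeadm init`/`kubeadm join`, not written by hand.
    from pathlib import Path

    MINIMAL_KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(MINIMAL_KUBELET_CONFIG)
    print(f"wrote {path}")

The systemd unit keeps rescheduling restarts (restart counters 1-6 later in this log) until such a file appears, at which point the kubelet can start normally.
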
Nov 1 00:20:29.991814 systemd[1]: Stopped kubelet.service. Nov 1 00:20:29.993250 systemd[1]: Starting kubelet.service... Nov 1 00:20:30.162620 systemd[1]: Started kubelet.service. Nov 1 00:20:30.210090 kubelet[1616]: E1101 00:20:30.210035 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:30.212866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:30.212985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:35.049009 waagent[1585]: 2025-11-01T00:20:35.048890Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Nov 1 00:20:35.074212 waagent[1585]: 2025-11-01T00:20:35.074116Z INFO Daemon Daemon OS: flatcar 3510.3.8 Nov 1 00:20:35.079214 waagent[1585]: 2025-11-01T00:20:35.079138Z INFO Daemon Daemon Python: 3.9.16 Nov 1 00:20:35.084481 waagent[1585]: 2025-11-01T00:20:35.084095Z INFO Daemon Daemon Run daemon Nov 1 00:20:35.089363 waagent[1585]: 2025-11-01T00:20:35.089294Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Nov 1 00:20:35.124611 waagent[1585]: 2025-11-01T00:20:35.124447Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Nov 1 00:20:35.140805 waagent[1585]: 2025-11-01T00:20:35.140618Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 1 00:20:35.151733 waagent[1585]: 2025-11-01T00:20:35.151631Z INFO Daemon Daemon cloud-init is enabled: False Nov 1 00:20:35.157387 waagent[1585]: 2025-11-01T00:20:35.157307Z INFO Daemon Daemon Using waagent for provisioning Nov 1 00:20:35.164197 waagent[1585]: 2025-11-01T00:20:35.164122Z INFO Daemon Daemon Activate resource disk Nov 1 00:20:35.170260 waagent[1585]: 2025-11-01T00:20:35.170182Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 1 00:20:35.186408 waagent[1585]: 2025-11-01T00:20:35.186318Z INFO Daemon Daemon Found device: None Nov 1 00:20:35.191288 waagent[1585]: 2025-11-01T00:20:35.191210Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 1 00:20:35.201041 waagent[1585]: 2025-11-01T00:20:35.200957Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 1 00:20:35.213406 waagent[1585]: 2025-11-01T00:20:35.213332Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 1 00:20:35.219471 waagent[1585]: 2025-11-01T00:20:35.219393Z INFO Daemon Daemon Running default provisioning handler Nov 1 00:20:35.233561 waagent[1585]: 2025-11-01T00:20:35.233383Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Nov 1 00:20:35.248762 waagent[1585]: 2025-11-01T00:20:35.248594Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 1 00:20:35.259036 waagent[1585]: 2025-11-01T00:20:35.258945Z INFO Daemon Daemon cloud-init is enabled: False Nov 1 00:20:35.264748 waagent[1585]: 2025-11-01T00:20:35.264655Z INFO Daemon Daemon Copying ovf-env.xml Nov 1 00:20:35.432056 waagent[1585]: 2025-11-01T00:20:35.430445Z INFO Daemon Daemon Successfully mounted dvd Nov 1 00:20:35.547153 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 1 00:20:35.600328 waagent[1585]: 2025-11-01T00:20:35.600168Z INFO Daemon Daemon Detect protocol endpoint Nov 1 00:20:35.605477 waagent[1585]: 2025-11-01T00:20:35.605387Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 1 00:20:35.611980 waagent[1585]: 2025-11-01T00:20:35.611903Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 1 00:20:35.618923 waagent[1585]: 2025-11-01T00:20:35.618853Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 1 00:20:35.624932 waagent[1585]: 2025-11-01T00:20:35.624860Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 1 00:20:35.630584 waagent[1585]: 2025-11-01T00:20:35.630509Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 1 00:20:35.936383 waagent[1585]: 2025-11-01T00:20:35.936313Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 1 00:20:35.944075 waagent[1585]: 2025-11-01T00:20:35.944024Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 1 00:20:35.949821 waagent[1585]: 2025-11-01T00:20:35.949736Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 1 00:20:36.712071 waagent[1585]: 2025-11-01T00:20:36.711919Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 1 00:20:36.760329 waagent[1585]: 2025-11-01T00:20:36.760238Z INFO Daemon Daemon Forcing an update of the goal state.. Nov 1 00:20:36.766567 waagent[1585]: 2025-11-01T00:20:36.766476Z INFO Daemon Daemon Fetching goal state [incarnation 1] Nov 1 00:20:36.918299 waagent[1585]: 2025-11-01T00:20:36.918160Z INFO Daemon Daemon Found private key matching thumbprint C04D6ED87197A28A0E26AD92C9D10D0E177ED86D Nov 1 00:20:36.927189 waagent[1585]: 2025-11-01T00:20:36.927092Z INFO Daemon Daemon Fetch goal state completed Nov 1 00:20:36.974199 waagent[1585]: 2025-11-01T00:20:36.974090Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 018f861b-9c8c-4362-a47f-7647e807f4a9 New eTag: 11212807665685450739] Nov 1 00:20:36.985470 waagent[1585]: 2025-11-01T00:20:36.985378Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Nov 1 00:20:37.033314 waagent[1585]: 2025-11-01T00:20:37.033247Z INFO Daemon Daemon Starting provisioning Nov 1 00:20:37.038839 waagent[1585]: 2025-11-01T00:20:37.038741Z INFO Daemon Daemon Handle ovf-env.xml. Nov 1 00:20:37.043748 waagent[1585]: 2025-11-01T00:20:37.043642Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-ec0975c3e1] Nov 1 00:20:37.112852 waagent[1585]: 2025-11-01T00:20:37.112716Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-ec0975c3e1] Nov 1 00:20:37.119604 waagent[1585]: 2025-11-01T00:20:37.119508Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 1 00:20:37.126355 waagent[1585]: 2025-11-01T00:20:37.126272Z INFO Daemon Daemon Primary interface is [eth0] Nov 1 00:20:37.142983 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Nov 1 00:20:37.143162 systemd[1]: Stopped systemd-networkd-wait-online.service. 
Nov 1 00:20:37.143223 systemd[1]: Stopping systemd-networkd-wait-online.service... Nov 1 00:20:37.143464 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:20:37.152728 systemd-networkd[1250]: eth0: DHCPv6 lease lost Nov 1 00:20:37.154110 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:20:37.154290 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:20:37.156245 systemd[1]: Starting systemd-networkd.service... Nov 1 00:20:37.185212 systemd-networkd[1648]: enP23400s1: Link UP Nov 1 00:20:37.185489 systemd-networkd[1648]: enP23400s1: Gained carrier Nov 1 00:20:37.186729 systemd-networkd[1648]: eth0: Link UP Nov 1 00:20:37.186834 systemd-networkd[1648]: eth0: Gained carrier Nov 1 00:20:37.187259 systemd-networkd[1648]: lo: Link UP Nov 1 00:20:37.187323 systemd-networkd[1648]: lo: Gained carrier Nov 1 00:20:37.187639 systemd-networkd[1648]: eth0: Gained IPv6LL Nov 1 00:20:37.188891 systemd-networkd[1648]: Enumeration completed Nov 1 00:20:37.189092 systemd[1]: Started systemd-networkd.service. Nov 1 00:20:37.190828 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:20:37.191136 systemd-networkd[1648]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:37.195445 waagent[1585]: 2025-11-01T00:20:37.195030Z INFO Daemon Daemon Create user account if not exists Nov 1 00:20:37.202470 waagent[1585]: 2025-11-01T00:20:37.202371Z INFO Daemon Daemon User core already exists, skip useradd Nov 1 00:20:37.210813 waagent[1585]: 2025-11-01T00:20:37.210668Z INFO Daemon Daemon Configure sudoer Nov 1 00:20:37.218766 systemd-networkd[1648]: eth0: DHCPv4 address 10.200.20.48/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 1 00:20:37.222626 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:20:37.230298 waagent[1585]: 2025-11-01T00:20:37.230150Z INFO Daemon Daemon Configure sshd Nov 1 00:20:37.235685 waagent[1585]: 2025-11-01T00:20:37.235556Z INFO Daemon Daemon Deploy ssh public key. Nov 1 00:20:38.467305 waagent[1585]: 2025-11-01T00:20:38.467231Z INFO Daemon Daemon Provisioning complete Nov 1 00:20:38.487444 waagent[1585]: 2025-11-01T00:20:38.487379Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 1 00:20:38.494938 waagent[1585]: 2025-11-01T00:20:38.494850Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 1 00:20:38.506524 waagent[1585]: 2025-11-01T00:20:38.506443Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Nov 1 00:20:38.814498 waagent[1654]: 2025-11-01T00:20:38.814398Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Nov 1 00:20:38.815642 waagent[1654]: 2025-11-01T00:20:38.815579Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:20:38.815908 waagent[1654]: 2025-11-01T00:20:38.815859Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:20:38.828948 waagent[1654]: 2025-11-01T00:20:38.828851Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Nov 1 00:20:38.829271 waagent[1654]: 2025-11-01T00:20:38.829223Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Nov 1 00:20:38.892910 waagent[1654]: 2025-11-01T00:20:38.892757Z INFO ExtHandler ExtHandler Found private key matching thumbprint C04D6ED87197A28A0E26AD92C9D10D0E177ED86D Nov 1 00:20:38.893385 waagent[1654]: 2025-11-01T00:20:38.893333Z INFO ExtHandler ExtHandler Fetch goal state completed Nov 1 00:20:38.909124 waagent[1654]: 2025-11-01T00:20:38.909065Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 955d8149-d5d5-40fc-ba24-b8cac516afdb New eTag: 11212807665685450739] Nov 1 00:20:38.909899 waagent[1654]: 2025-11-01T00:20:38.909837Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Nov 1 00:20:39.047790 waagent[1654]: 2025-11-01T00:20:39.047623Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 1 00:20:39.074789 waagent[1654]: 2025-11-01T00:20:39.074634Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1654 Nov 1 00:20:39.078781 waagent[1654]: 2025-11-01T00:20:39.078674Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Nov 1 00:20:39.080259 waagent[1654]: 2025-11-01T00:20:39.080186Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 1 00:20:39.245264 waagent[1654]: 2025-11-01T00:20:39.245206Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 1 00:20:39.245894 waagent[1654]: 2025-11-01T00:20:39.245835Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 1 00:20:39.254020 waagent[1654]: 2025-11-01T00:20:39.253962Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 1 00:20:39.254724 waagent[1654]: 2025-11-01T00:20:39.254649Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Nov 1 00:20:39.256066 waagent[1654]: 2025-11-01T00:20:39.256004Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Nov 1 00:20:39.257572 waagent[1654]: 2025-11-01T00:20:39.257501Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 1 00:20:39.257944 waagent[1654]: 2025-11-01T00:20:39.257871Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:20:39.258422 waagent[1654]: 2025-11-01T00:20:39.258356Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:20:39.259047 waagent[1654]: 2025-11-01T00:20:39.258981Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Nov 1 00:20:39.259378 waagent[1654]: 2025-11-01T00:20:39.259320Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 00:20:39.259378 waagent[1654]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 00:20:39.259378 waagent[1654]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 00:20:39.259378 waagent[1654]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 00:20:39.259378 waagent[1654]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:20:39.259378 waagent[1654]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:20:39.259378 waagent[1654]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:20:39.261719 waagent[1654]: 2025-11-01T00:20:39.261519Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 1 00:20:39.262560 waagent[1654]: 2025-11-01T00:20:39.262484Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:20:39.262786 waagent[1654]: 2025-11-01T00:20:39.262725Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:20:39.263391 waagent[1654]: 2025-11-01T00:20:39.263328Z INFO EnvHandler ExtHandler Configure routes Nov 1 00:20:39.263543 waagent[1654]: 2025-11-01T00:20:39.263497Z INFO EnvHandler ExtHandler Gateway:None Nov 1 00:20:39.263657 waagent[1654]: 2025-11-01T00:20:39.263615Z INFO EnvHandler ExtHandler Routes:None Nov 1 00:20:39.264554 waagent[1654]: 2025-11-01T00:20:39.264496Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 00:20:39.264741 waagent[1654]: 2025-11-01T00:20:39.264653Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 1 00:20:39.265670 waagent[1654]: 2025-11-01T00:20:39.265577Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 1 00:20:39.265881 waagent[1654]: 2025-11-01T00:20:39.265810Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 1 00:20:39.266171 waagent[1654]: 2025-11-01T00:20:39.266109Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 1 00:20:39.277638 waagent[1654]: 2025-11-01T00:20:39.277545Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Nov 1 00:20:39.278567 waagent[1654]: 2025-11-01T00:20:39.278518Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Nov 1 00:20:39.279605 waagent[1654]: 2025-11-01T00:20:39.279552Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Nov 1 00:20:39.326791 waagent[1654]: 2025-11-01T00:20:39.326653Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
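
Editor's note: the routing table the agent dumps above is read straight from /proc/net/route, where destination, gateway and mask fields are little-endian 32-bit hex. A small decoding sketch, using values copied from the dump above:

    import socket
    import struct

    def hex_to_ip(field: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex.
        return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

    for label, field in [("default gateway", "0114C80A"),
                         ("wire server host route", "10813FA8"),
                         ("IMDS host route", "FEA9FEA9"),
                         ("local subnet mask", "00FFFFFF")]:
        print(f"{label}: {hex_to_ip(field)}")

    # default gateway: 10.200.20.1
    # wire server host route: 168.63.129.16
    # IMDS host route: 169.254.169.254
    # local subnet mask: 255.255.255.0

The decoded addresses match the rest of the log: 10.200.20.1 is the DHCP gateway, 168.63.129.16 is the WireServer endpoint the agent reports, and 169.254.169.254 is the instance metadata service.
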
Nov 1 00:20:39.346172 waagent[1654]: 2025-11-01T00:20:39.346037Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1648' Nov 1 00:20:39.473523 waagent[1654]: 2025-11-01T00:20:39.473369Z INFO MonitorHandler ExtHandler Network interfaces: Nov 1 00:20:39.473523 waagent[1654]: Executing ['ip', '-a', '-o', 'link']: Nov 1 00:20:39.473523 waagent[1654]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 1 00:20:39.473523 waagent[1654]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:53:50 brd ff:ff:ff:ff:ff:ff Nov 1 00:20:39.473523 waagent[1654]: 3: enP23400s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:53:50 brd ff:ff:ff:ff:ff:ff\ altname enP23400p0s2 Nov 1 00:20:39.473523 waagent[1654]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 1 00:20:39.473523 waagent[1654]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 1 00:20:39.473523 waagent[1654]: 2: eth0 inet 10.200.20.48/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 1 00:20:39.473523 waagent[1654]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 1 00:20:39.473523 waagent[1654]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Nov 1 00:20:39.473523 waagent[1654]: 2: eth0 inet6 fe80::222:48ff:feb6:5350/64 scope link \ valid_lft forever preferred_lft forever Nov 1 00:20:39.807813 waagent[1654]: 2025-11-01T00:20:39.807748Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.15.0.1 -- exiting Nov 1 00:20:40.241608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:20:40.241791 systemd[1]: Stopped kubelet.service. Nov 1 00:20:40.243177 systemd[1]: Starting kubelet.service... Nov 1 00:20:40.511772 waagent[1585]: 2025-11-01T00:20:40.511014Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Nov 1 00:20:40.517998 waagent[1585]: 2025-11-01T00:20:40.517931Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.15.0.1 to be the latest agent Nov 1 00:20:40.570606 systemd[1]: Started kubelet.service. Nov 1 00:20:40.605136 kubelet[1688]: E1101 00:20:40.605082 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:40.607364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:40.607486 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:20:42.107336 waagent[1685]: 2025-11-01T00:20:42.107231Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.15.0.1) Nov 1 00:20:42.108106 waagent[1685]: 2025-11-01T00:20:42.108041Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Nov 1 00:20:42.108255 waagent[1685]: 2025-11-01T00:20:42.108207Z INFO ExtHandler ExtHandler Python: 3.9.16 Nov 1 00:20:42.108393 waagent[1685]: 2025-11-01T00:20:42.108349Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Nov 1 00:20:42.122904 waagent[1685]: 2025-11-01T00:20:42.122769Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 1 00:20:42.123355 waagent[1685]: 2025-11-01T00:20:42.123296Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:20:42.123518 waagent[1685]: 2025-11-01T00:20:42.123473Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:20:42.123754 waagent[1685]: 2025-11-01T00:20:42.123705Z INFO ExtHandler ExtHandler Initializing the goal state... Nov 1 00:20:42.138176 waagent[1685]: 2025-11-01T00:20:42.138089Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 1 00:20:42.152066 waagent[1685]: 2025-11-01T00:20:42.152008Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 1 00:20:42.153223 waagent[1685]: 2025-11-01T00:20:42.153166Z INFO ExtHandler Nov 1 00:20:42.153392 waagent[1685]: 2025-11-01T00:20:42.153344Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 68f6a548-22a2-46d3-a44b-36502d628770 eTag: 11212807665685450739 source: Fabric] Nov 1 00:20:42.154183 waagent[1685]: 2025-11-01T00:20:42.154127Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Nov 1 00:20:42.155451 waagent[1685]: 2025-11-01T00:20:42.155391Z INFO ExtHandler Nov 1 00:20:42.155600 waagent[1685]: 2025-11-01T00:20:42.155555Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 1 00:20:42.166023 waagent[1685]: 2025-11-01T00:20:42.165965Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 1 00:20:42.166626 waagent[1685]: 2025-11-01T00:20:42.166576Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Nov 1 00:20:42.186972 waagent[1685]: 2025-11-01T00:20:42.186909Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Nov 1 00:20:42.252944 waagent[1685]: 2025-11-01T00:20:42.252807Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C04D6ED87197A28A0E26AD92C9D10D0E177ED86D', 'hasPrivateKey': True} Nov 1 00:20:42.254395 waagent[1685]: 2025-11-01T00:20:42.254324Z INFO ExtHandler Fetch goal state from WireServer completed Nov 1 00:20:42.255374 waagent[1685]: 2025-11-01T00:20:42.255312Z INFO ExtHandler ExtHandler Goal state initialization completed. 
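
Editor's note: the goal-state initialization logged above talks to the WireServer at 168.63.129.16 using wire protocol version 2012-11-30 (both values appear in the log). A rough sketch of that fetch, purely illustrative; it assumes the `requests` library and ignores the certificate download and extension processing the agent actually performs:

    import requests  # assumption: requests is available; waagent itself uses its own HTTP helpers

    WIRESERVER = "168.63.129.16"

    resp = requests.get(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
        timeout=10,
    )
    resp.raise_for_status()
    # The response is XML describing the incarnation, container id and role
    # configuration, which the agent resolves into certificates and extensions.
    print(resp.text[:300])
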
Nov 1 00:20:42.276385 waagent[1685]: 2025-11-01T00:20:42.276247Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Nov 1 00:20:42.285172 waagent[1685]: 2025-11-01T00:20:42.285045Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 00:20:42.289106 waagent[1685]: 2025-11-01T00:20:42.288980Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Nov 1 00:20:42.289342 waagent[1685]: 2025-11-01T00:20:42.289290Z INFO ExtHandler ExtHandler Checking state of the firewall Nov 1 00:20:42.501236 waagent[1685]: 2025-11-01T00:20:42.501096Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Nov 1 00:20:42.501236 waagent[1685]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:20:42.501236 waagent[1685]: pkts bytes target prot opt in out source destination Nov 1 00:20:42.501236 waagent[1685]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:20:42.501236 waagent[1685]: pkts bytes target prot opt in out source destination Nov 1 00:20:42.501236 waagent[1685]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:20:42.501236 waagent[1685]: pkts bytes target prot opt in out source destination Nov 1 00:20:42.501236 waagent[1685]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 1 00:20:42.501236 waagent[1685]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 1 00:20:42.501236 waagent[1685]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 00:20:42.502419 waagent[1685]: 2025-11-01T00:20:42.502353Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Nov 1 00:20:42.505572 waagent[1685]: 2025-11-01T00:20:42.505435Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Nov 1 00:20:42.506091 waagent[1685]: 2025-11-01T00:20:42.506031Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up /lib/systemd/system/waagent-network-setup.service Nov 1 00:20:42.506480 waagent[1685]: 2025-11-01T00:20:42.506424Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 1 00:20:42.514818 waagent[1685]: 2025-11-01T00:20:42.514741Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Nov 1 00:20:42.515418 waagent[1685]: 2025-11-01T00:20:42.515358Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Nov 1 00:20:42.524005 waagent[1685]: 2025-11-01T00:20:42.523925Z INFO ExtHandler ExtHandler WALinuxAgent-2.15.0.1 running as process 1685 Nov 1 00:20:42.527476 waagent[1685]: 2025-11-01T00:20:42.527397Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Nov 1 00:20:42.528400 waagent[1685]: 2025-11-01T00:20:42.528339Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Nov 1 00:20:42.529350 waagent[1685]: 2025-11-01T00:20:42.529292Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 1 00:20:42.532237 waagent[1685]: 2025-11-01T00:20:42.532176Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Nov 1 00:20:42.532596 waagent[1685]: 2025-11-01T00:20:42.532544Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 1 00:20:42.534461 waagent[1685]: 2025-11-01T00:20:42.534389Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 1 00:20:42.535194 waagent[1685]: 2025-11-01T00:20:42.535134Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:20:42.535469 waagent[1685]: 2025-11-01T00:20:42.535420Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:20:42.536243 waagent[1685]: 2025-11-01T00:20:42.536187Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 1 00:20:42.536734 waagent[1685]: 2025-11-01T00:20:42.536653Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 00:20:42.536734 waagent[1685]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 00:20:42.536734 waagent[1685]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 00:20:42.536734 waagent[1685]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 00:20:42.536734 waagent[1685]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:20:42.536734 waagent[1685]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:20:42.536734 waagent[1685]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:20:42.539516 waagent[1685]: 2025-11-01T00:20:42.539400Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 1 00:20:42.542561 waagent[1685]: 2025-11-01T00:20:42.542385Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:20:42.543222 waagent[1685]: 2025-11-01T00:20:42.543166Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:20:42.543453 waagent[1685]: 2025-11-01T00:20:42.543387Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 00:20:42.543588 waagent[1685]: 2025-11-01T00:20:42.543534Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Nov 1 00:20:42.547002 waagent[1685]: 2025-11-01T00:20:42.546814Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 1 00:20:42.547525 waagent[1685]: 2025-11-01T00:20:42.547465Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 1 00:20:42.548391 waagent[1685]: 2025-11-01T00:20:42.548323Z INFO EnvHandler ExtHandler Configure routes Nov 1 00:20:42.550001 waagent[1685]: 2025-11-01T00:20:42.549923Z INFO MonitorHandler ExtHandler Network interfaces: Nov 1 00:20:42.550001 waagent[1685]: Executing ['ip', '-a', '-o', 'link']: Nov 1 00:20:42.550001 waagent[1685]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 1 00:20:42.550001 waagent[1685]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:53:50 brd ff:ff:ff:ff:ff:ff Nov 1 00:20:42.550001 waagent[1685]: 3: enP23400s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:53:50 brd ff:ff:ff:ff:ff:ff\ altname enP23400p0s2 Nov 1 00:20:42.550001 waagent[1685]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 1 00:20:42.550001 waagent[1685]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 1 00:20:42.550001 waagent[1685]: 2: eth0 inet 10.200.20.48/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 1 00:20:42.550001 waagent[1685]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 1 00:20:42.550001 waagent[1685]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Nov 1 00:20:42.550001 waagent[1685]: 2: eth0 inet6 fe80::222:48ff:feb6:5350/64 scope link \ valid_lft forever preferred_lft forever Nov 1 00:20:42.550702 waagent[1685]: 2025-11-01T00:20:42.550620Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 1 00:20:42.551024 waagent[1685]: 2025-11-01T00:20:42.550961Z INFO EnvHandler ExtHandler Gateway:None Nov 1 00:20:42.551423 waagent[1685]: 2025-11-01T00:20:42.551362Z INFO EnvHandler ExtHandler Routes:None Nov 1 00:20:42.563529 waagent[1685]: 2025-11-01T00:20:42.563414Z INFO ExtHandler ExtHandler Downloading agent manifest Nov 1 00:20:42.583169 waagent[1685]: 2025-11-01T00:20:42.583087Z INFO ExtHandler ExtHandler Nov 1 00:20:42.583725 waagent[1685]: 2025-11-01T00:20:42.583641Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dd62dfdc-bea5-488a-a05a-3cdc01ae6826 correlation 76e55102-2eb1-4dae-b89f-a36888376b61 created: 2025-11-01T00:18:35.989365Z] Nov 1 00:20:42.587298 waagent[1685]: 2025-11-01T00:20:42.587223Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 1 00:20:42.593950 waagent[1685]: 2025-11-01T00:20:42.593872Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Nov 1 00:20:42.616880 waagent[1685]: 2025-11-01T00:20:42.616756Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 00:20:42.619407 waagent[1685]: 2025-11-01T00:20:42.619322Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
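
Editor's note: the OUTPUT-chain rules listed at 00:20:42.50 above (allow DNS to 168.63.129.16, allow root-owned traffic, drop other new connections) are applied by the agent via iptables in the security table, in the same command-list style it logs for the legacy-rule check. A sketch of equivalent calls; the exact flags are reconstructed from the rule listing, so treat them as illustrative:

    import subprocess

    WIRESERVER = "168.63.129.16"

    RULES = [
        # allow DNS queries to the wire server
        ["iptables", "-w", "-t", "security", "-A", "OUTPUT", "-d", WIRESERVER,
         "-p", "tcp", "--destination-port", "53", "-j", "ACCEPT"],
        # allow traffic from root (the agent runs as uid 0) to the wire server
        ["iptables", "-w", "-t", "security", "-A", "OUTPUT", "-d", WIRESERVER,
         "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop any other attempt to open a new connection to it
        ["iptables", "-w", "-t", "security", "-A", "OUTPUT", "-d", WIRESERVER,
         "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(rule, check=True)
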
Nov 1 00:20:42.624840 waagent[1685]: 2025-11-01T00:20:42.624678Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.15.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0D0798B6-6ED2-4ED5-A4F8-E4DA3DDB5CC2;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Nov 1 00:20:42.634402 waagent[1685]: 2025-11-01T00:20:42.634322Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 1 00:20:50.741610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:20:50.741812 systemd[1]: Stopped kubelet.service. Nov 1 00:20:50.743182 systemd[1]: Starting kubelet.service... Nov 1 00:20:50.930294 systemd[1]: Started kubelet.service. Nov 1 00:20:50.966309 kubelet[1737]: E1101 00:20:50.966242 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:50.968497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:50.968616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:51.595492 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Nov 1 00:20:52.748700 systemd[1]: Created slice system-sshd.slice. Nov 1 00:20:52.750317 systemd[1]: Started sshd@0-10.200.20.48:22-10.200.16.10:46390.service. Nov 1 00:20:53.484586 sshd[1743]: Accepted publickey for core from 10.200.16.10 port 46390 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:20:53.504391 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:53.507740 systemd-logind[1467]: New session 3 of user core. Nov 1 00:20:53.508523 systemd[1]: Started session-3.scope. Nov 1 00:20:53.862049 systemd[1]: Started sshd@1-10.200.20.48:22-10.200.16.10:46406.service. Nov 1 00:20:54.278353 sshd[1748]: Accepted publickey for core from 10.200.16.10 port 46406 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:20:54.279974 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:54.284151 systemd[1]: Started session-4.scope. Nov 1 00:20:54.284592 systemd-logind[1467]: New session 4 of user core. Nov 1 00:20:54.586721 sshd[1748]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:54.589310 systemd[1]: sshd@1-10.200.20.48:22-10.200.16.10:46406.service: Deactivated successfully. Nov 1 00:20:54.590049 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:20:54.590579 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:20:54.591365 systemd-logind[1467]: Removed session 4. Nov 1 00:20:54.656291 systemd[1]: Started sshd@2-10.200.20.48:22-10.200.16.10:46422.service. Nov 1 00:20:55.078574 sshd[1754]: Accepted publickey for core from 10.200.16.10 port 46422 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:20:55.079854 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:55.084105 systemd[1]: Started session-5.scope. Nov 1 00:20:55.084416 systemd-logind[1467]: New session 5 of user core. Nov 1 00:20:55.394498 sshd[1754]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:55.397131 systemd[1]: sshd@2-10.200.20.48:22-10.200.16.10:46422.service: Deactivated successfully. 
Nov 1 00:20:55.397809 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:20:55.398322 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:20:55.399150 systemd-logind[1467]: Removed session 5. Nov 1 00:20:55.471005 systemd[1]: Started sshd@3-10.200.20.48:22-10.200.16.10:46434.service. Nov 1 00:20:55.898316 sshd[1760]: Accepted publickey for core from 10.200.16.10 port 46434 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:20:55.902350 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:55.906274 systemd-logind[1467]: New session 6 of user core. Nov 1 00:20:55.906715 systemd[1]: Started session-6.scope. Nov 1 00:20:56.231526 sshd[1760]: pam_unix(sshd:session): session closed for user core Nov 1 00:20:56.234166 systemd[1]: sshd@3-10.200.20.48:22-10.200.16.10:46434.service: Deactivated successfully. Nov 1 00:20:56.234863 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:20:56.235382 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:20:56.236240 systemd-logind[1467]: Removed session 6. Nov 1 00:20:56.299791 systemd[1]: Started sshd@4-10.200.20.48:22-10.200.16.10:46444.service. Nov 1 00:20:56.722020 sshd[1766]: Accepted publickey for core from 10.200.16.10 port 46444 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:20:56.723277 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:20:56.726735 systemd-logind[1467]: New session 7 of user core. Nov 1 00:20:56.727524 systemd[1]: Started session-7.scope. Nov 1 00:20:57.354930 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:20:57.355152 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:20:57.394032 systemd[1]: Starting docker.service... Nov 1 00:20:57.448515 env[1780]: time="2025-11-01T00:20:57.448463493Z" level=info msg="Starting up" Nov 1 00:20:57.450134 env[1780]: time="2025-11-01T00:20:57.450101478Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:20:57.450134 env[1780]: time="2025-11-01T00:20:57.450127637Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:20:57.450268 env[1780]: time="2025-11-01T00:20:57.450147837Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:20:57.450268 env[1780]: time="2025-11-01T00:20:57.450158757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:20:57.452045 env[1780]: time="2025-11-01T00:20:57.452012180Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:20:57.452045 env[1780]: time="2025-11-01T00:20:57.452037300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:20:57.452154 env[1780]: time="2025-11-01T00:20:57.452053579Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:20:57.452154 env[1780]: time="2025-11-01T00:20:57.452063179Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:20:57.457440 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1913055812-merged.mount: Deactivated successfully. Nov 1 00:20:57.548638 env[1780]: time="2025-11-01T00:20:57.548603044Z" level=info msg="Loading containers: start." 
Nov 1 00:20:57.830708 kernel: Initializing XFRM netlink socket Nov 1 00:20:57.882574 env[1780]: time="2025-11-01T00:20:57.882541026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:20:58.121145 systemd-networkd[1648]: docker0: Link UP Nov 1 00:20:58.153900 env[1780]: time="2025-11-01T00:20:58.153856117Z" level=info msg="Loading containers: done." Nov 1 00:20:58.162809 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1053025645-merged.mount: Deactivated successfully. Nov 1 00:20:58.177231 env[1780]: time="2025-11-01T00:20:58.177184834Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:20:58.177429 env[1780]: time="2025-11-01T00:20:58.177405352Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:20:58.177544 env[1780]: time="2025-11-01T00:20:58.177522631Z" level=info msg="Daemon has completed initialization" Nov 1 00:20:58.216020 systemd[1]: Started docker.service. Nov 1 00:20:58.225785 env[1780]: time="2025-11-01T00:20:58.225720412Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:21:00.991645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 00:21:00.991845 systemd[1]: Stopped kubelet.service. Nov 1 00:21:00.993245 systemd[1]: Starting kubelet.service... Nov 1 00:21:01.499515 systemd[1]: Started kubelet.service. Nov 1 00:21:01.538229 kubelet[1898]: E1101 00:21:01.538173 1898 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:01.540656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:01.540803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:02.530622 env[1490]: time="2025-11-01T00:21:02.530579500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:21:03.507089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029815243.mount: Deactivated successfully. Nov 1 00:21:04.482718 update_engine[1468]: I1101 00:21:04.482600 1468 update_attempter.cc:509] Updating boot flags... 
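
Editor's note: dockerd, started at 00:20:57-58 above, connects to containerd over /var/run/docker/libcontainerd/docker-containerd.sock and exposes its own API on /run/docker.sock. A minimal sketch of querying that API socket, assuming the Docker SDK for Python is installed (it is not part of this image):

    import docker  # assumption: docker SDK for Python (docker-py) is available

    client = docker.DockerClient(base_url="unix:///run/docker.sock")
    info = client.version()
    # Expect the daemon version reported in the log above, e.g. 20.10.23.
    print(info.get("Version"), info.get("ApiVersion"))
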
Nov 1 00:21:05.151552 env[1490]: time="2025-11-01T00:21:05.151505543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:05.159439 env[1490]: time="2025-11-01T00:21:05.159398979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:05.165723 env[1490]: time="2025-11-01T00:21:05.165676664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:05.171321 env[1490]: time="2025-11-01T00:21:05.171273513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:05.172188 env[1490]: time="2025-11-01T00:21:05.172157308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 1 00:21:05.172749 env[1490]: time="2025-11-01T00:21:05.172724785Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:21:06.781293 env[1490]: time="2025-11-01T00:21:06.781224390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:06.791751 env[1490]: time="2025-11-01T00:21:06.791704496Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:06.797244 env[1490]: time="2025-11-01T00:21:06.797204627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:06.802411 env[1490]: time="2025-11-01T00:21:06.802371361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:06.803222 env[1490]: time="2025-11-01T00:21:06.803190996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 1 00:21:06.803783 env[1490]: time="2025-11-01T00:21:06.803761993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:21:08.023528 env[1490]: time="2025-11-01T00:21:08.023475282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:08.033705 env[1490]: time="2025-11-01T00:21:08.033659275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:08.038629 env[1490]: 
time="2025-11-01T00:21:08.038581493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:08.043677 env[1490]: time="2025-11-01T00:21:08.043625230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:08.044339 env[1490]: time="2025-11-01T00:21:08.044305027Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 1 00:21:08.045563 env[1490]: time="2025-11-01T00:21:08.045532821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:21:09.288960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877369347.mount: Deactivated successfully. Nov 1 00:21:09.688227 env[1490]: time="2025-11-01T00:21:09.688117244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:09.695737 env[1490]: time="2025-11-01T00:21:09.695668092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:09.700012 env[1490]: time="2025-11-01T00:21:09.699971194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:09.703943 env[1490]: time="2025-11-01T00:21:09.703903137Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:09.704699 env[1490]: time="2025-11-01T00:21:09.704660574Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 1 00:21:09.705710 env[1490]: time="2025-11-01T00:21:09.705668169Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:21:10.399716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361594272.mount: Deactivated successfully. Nov 1 00:21:11.741514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 1 00:21:11.741682 systemd[1]: Stopped kubelet.service. Nov 1 00:21:11.743055 systemd[1]: Starting kubelet.service... Nov 1 00:21:11.993334 systemd[1]: Started kubelet.service. Nov 1 00:21:12.028512 kubelet[1948]: E1101 00:21:12.028458 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:12.030274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:12.030390 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:21:12.638675 env[1490]: time="2025-11-01T00:21:12.638602772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:12.647757 env[1490]: time="2025-11-01T00:21:12.647664460Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:12.653555 env[1490]: time="2025-11-01T00:21:12.653500040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:12.658156 env[1490]: time="2025-11-01T00:21:12.658121543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:12.658998 env[1490]: time="2025-11-01T00:21:12.658969140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 1 00:21:12.659552 env[1490]: time="2025-11-01T00:21:12.659528138Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:21:13.309037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518199367.mount: Deactivated successfully. Nov 1 00:21:13.335550 env[1490]: time="2025-11-01T00:21:13.335506910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:13.344082 env[1490]: time="2025-11-01T00:21:13.344042802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:13.351137 env[1490]: time="2025-11-01T00:21:13.351086899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:13.355913 env[1490]: time="2025-11-01T00:21:13.355871483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:13.356148 env[1490]: time="2025-11-01T00:21:13.356122682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 1 00:21:13.357100 env[1490]: time="2025-11-01T00:21:13.357065119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:21:19.474427 env[1490]: time="2025-11-01T00:21:19.474365019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:19.520057 env[1490]: time="2025-11-01T00:21:19.520012459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:19.566654 env[1490]: 
time="2025-11-01T00:21:19.566598542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:19.572982 env[1490]: time="2025-11-01T00:21:19.572917710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:19.574040 env[1490]: time="2025-11-01T00:21:19.574006133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 1 00:21:22.241513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 1 00:21:22.241709 systemd[1]: Stopped kubelet.service. Nov 1 00:21:22.243074 systemd[1]: Starting kubelet.service... Nov 1 00:21:22.683465 systemd[1]: Started kubelet.service. Nov 1 00:21:22.748516 kubelet[1974]: E1101 00:21:22.748474 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:22.750467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:22.750601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:26.580715 systemd[1]: Stopped kubelet.service. Nov 1 00:21:26.583095 systemd[1]: Starting kubelet.service... Nov 1 00:21:26.612400 systemd[1]: Reloading. Nov 1 00:21:26.692086 /usr/lib/systemd/system-generators/torcx-generator[2006]: time="2025-11-01T00:21:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:21:26.692120 /usr/lib/systemd/system-generators/torcx-generator[2006]: time="2025-11-01T00:21:26Z" level=info msg="torcx already run" Nov 1 00:21:26.770898 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:21:26.771058 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:21:26.786887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:26.878945 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:21:26.879015 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:21:26.879323 systemd[1]: Stopped kubelet.service. Nov 1 00:21:26.881426 systemd[1]: Starting kubelet.service... Nov 1 00:21:34.727978 systemd[1]: Started kubelet.service. Nov 1 00:21:34.772836 kubelet[2072]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:21:34.772836 kubelet[2072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:34.774142 kubelet[2072]: I1101 00:21:34.774096 2072 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:35.566430 kubelet[2072]: I1101 00:21:35.566391 2072 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:21:35.566430 kubelet[2072]: I1101 00:21:35.566420 2072 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:35.566620 kubelet[2072]: I1101 00:21:35.566445 2072 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:21:35.566620 kubelet[2072]: I1101 00:21:35.566451 2072 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:21:35.566761 kubelet[2072]: I1101 00:21:35.566743 2072 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:21:35.579702 kubelet[2072]: I1101 00:21:35.579662 2072 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:35.582720 kubelet[2072]: E1101 00:21:35.582667 2072 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:21:35.583543 kubelet[2072]: E1101 00:21:35.583512 2072 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:35.583663 kubelet[2072]: I1101 00:21:35.583650 2072 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:35.586373 kubelet[2072]: I1101 00:21:35.586348 2072 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:21:35.586707 kubelet[2072]: I1101 00:21:35.586669 2072 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:35.586920 kubelet[2072]: I1101 00:21:35.586780 2072 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-ec0975c3e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:21:35.587050 kubelet[2072]: I1101 00:21:35.587037 2072 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:21:35.587107 kubelet[2072]: I1101 00:21:35.587099 2072 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:21:35.587256 kubelet[2072]: I1101 00:21:35.587244 2072 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:21:35.594239 kubelet[2072]: I1101 00:21:35.594213 2072 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:35.595651 kubelet[2072]: I1101 00:21:35.595629 2072 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:21:35.595790 kubelet[2072]: I1101 00:21:35.595777 2072 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:35.595874 kubelet[2072]: I1101 00:21:35.595864 2072 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:21:35.595932 kubelet[2072]: I1101 00:21:35.595923 2072 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:35.596309 kubelet[2072]: E1101 00:21:35.596249 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-ec0975c3e1&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:21:35.596871 kubelet[2072]: E1101 00:21:35.596847 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:21:35.597084 kubelet[2072]: I1101 00:21:35.597070 2072 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:21:35.597727 kubelet[2072]: I1101 00:21:35.597681 2072 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:21:35.597834 kubelet[2072]: I1101 00:21:35.597823 2072 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:21:35.597913 kubelet[2072]: W1101 00:21:35.597903 2072 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:21:35.601898 kubelet[2072]: I1101 00:21:35.601866 2072 server.go:1262] "Started kubelet" Nov 1 00:21:35.603267 kubelet[2072]: I1101 00:21:35.603239 2072 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:35.604182 kubelet[2072]: I1101 00:21:35.604164 2072 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:21:35.605090 kubelet[2072]: I1101 00:21:35.605033 2072 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:35.605167 kubelet[2072]: I1101 00:21:35.605103 2072 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:21:35.605644 kubelet[2072]: I1101 00:21:35.605614 2072 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:35.616468 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Nov 1 00:21:35.616780 kubelet[2072]: I1101 00:21:35.616758 2072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:35.617638 kubelet[2072]: E1101 00:21:35.609749 2072 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.48:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-ec0975c3e1.1873ba18f60a33dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-ec0975c3e1,UID:ci-3510.3.8-n-ec0975c3e1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-ec0975c3e1,},FirstTimestamp:2025-11-01 00:21:35.601841117 +0000 UTC m=+0.868765235,LastTimestamp:2025-11-01 00:21:35.601841117 +0000 UTC m=+0.868765235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-ec0975c3e1,}" Nov 1 00:21:35.619770 kubelet[2072]: I1101 00:21:35.619747 2072 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:35.620901 kubelet[2072]: I1101 00:21:35.620876 2072 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:21:35.621137 kubelet[2072]: E1101 00:21:35.621106 2072 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" Nov 1 00:21:35.623200 kubelet[2072]: E1101 00:21:35.623164 2072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-ec0975c3e1?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="200ms" Nov 1 00:21:35.623334 kubelet[2072]: I1101 00:21:35.623322 2072 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:21:35.625068 kubelet[2072]: E1101 00:21:35.625039 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:21:35.625584 kubelet[2072]: I1101 00:21:35.625565 2072 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:21:35.625807 kubelet[2072]: I1101 00:21:35.625791 2072 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:21:35.625887 kubelet[2072]: I1101 00:21:35.625877 2072 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:21:35.626001 kubelet[2072]: I1101 00:21:35.625986 2072 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:35.631217 kubelet[2072]: E1101 00:21:35.631194 2072 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:35.710732 kubelet[2072]: I1101 00:21:35.710674 2072 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:21:35.711824 kubelet[2072]: I1101 00:21:35.711799 2072 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:21:35.711824 kubelet[2072]: I1101 00:21:35.711824 2072 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:21:35.711943 kubelet[2072]: I1101 00:21:35.711863 2072 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:21:35.711943 kubelet[2072]: E1101 00:21:35.711903 2072 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:35.713304 kubelet[2072]: E1101 00:21:35.713264 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:21:35.721949 kubelet[2072]: E1101 00:21:35.721915 2072 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" Nov 1 00:21:35.813036 kubelet[2072]: E1101 00:21:35.813000 2072 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:21:35.822996 kubelet[2072]: E1101 00:21:35.822139 2072 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" Nov 1 00:21:35.824465 kubelet[2072]: E1101 00:21:35.824433 2072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-ec0975c3e1?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="400ms" Nov 1 00:21:35.825437 kubelet[2072]: I1101 00:21:35.825423 2072 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:35.825557 kubelet[2072]: I1101 00:21:35.825544 2072 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:35.825629 kubelet[2072]: I1101 00:21:35.825621 2072 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:35.831804 kubelet[2072]: I1101 00:21:35.831780 2072 policy_none.go:49] "None policy: Start" Nov 1 00:21:35.831949 kubelet[2072]: I1101 00:21:35.831939 2072 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:21:35.832023 kubelet[2072]: I1101 00:21:35.832013 2072 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:21:35.838812 kubelet[2072]: I1101 00:21:35.838790 2072 policy_none.go:47] "Start" Nov 1 00:21:35.842307 systemd[1]: Created slice kubepods.slice. Nov 1 00:21:35.846959 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:21:35.859816 systemd[1]: Created slice kubepods-burstable.slice. 
Nov 1 00:21:35.861257 kubelet[2072]: E1101 00:21:35.861227 2072 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:21:35.861730 kubelet[2072]: I1101 00:21:35.861663 2072 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:35.861730 kubelet[2072]: I1101 00:21:35.861680 2072 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:35.862115 kubelet[2072]: I1101 00:21:35.862090 2072 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:21:35.863358 kubelet[2072]: E1101 00:21:35.863334 2072 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:21:35.863624 kubelet[2072]: E1101 00:21:35.863610 2072 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-ec0975c3e1\" not found" Nov 1 00:21:35.963127 kubelet[2072]: I1101 00:21:35.963083 2072 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:35.963571 kubelet[2072]: E1101 00:21:35.963549 2072 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.026268 systemd[1]: Created slice kubepods-burstable-pod80ac166ceeb064a078bb75921ed7e322.slice. Nov 1 00:21:36.027024 kubelet[2072]: I1101 00:21:36.026988 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029629 kubelet[2072]: I1101 00:21:36.027030 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029629 kubelet[2072]: I1101 00:21:36.027050 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029629 kubelet[2072]: I1101 00:21:36.027064 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029629 kubelet[2072]: I1101 00:21:36.027081 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029629 kubelet[2072]: I1101 00:21:36.027096 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ac166ceeb064a078bb75921ed7e322-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" (UID: \"80ac166ceeb064a078bb75921ed7e322\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029817 kubelet[2072]: I1101 00:21:36.027110 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ac166ceeb064a078bb75921ed7e322-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" (UID: \"80ac166ceeb064a078bb75921ed7e322\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.029817 kubelet[2072]: I1101 00:21:36.027124 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ac166ceeb064a078bb75921ed7e322-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" (UID: \"80ac166ceeb064a078bb75921ed7e322\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.032373 kubelet[2072]: E1101 00:21:36.032338 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.038312 systemd[1]: Created slice kubepods-burstable-pod562cbd7c3e58142a8c3c7e7f96044b1e.slice. Nov 1 00:21:36.040251 kubelet[2072]: E1101 00:21:36.040216 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.049376 systemd[1]: Created slice kubepods-burstable-podd5e69aefbc1489fe413ddce99de11267.slice. 
Nov 1 00:21:36.051433 kubelet[2072]: E1101 00:21:36.051398 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.127447 kubelet[2072]: I1101 00:21:36.127373 2072 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5e69aefbc1489fe413ddce99de11267-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-ec0975c3e1\" (UID: \"d5e69aefbc1489fe413ddce99de11267\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.165781 kubelet[2072]: I1101 00:21:36.165746 2072 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.166090 kubelet[2072]: E1101 00:21:36.166065 2072 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.225865 kubelet[2072]: E1101 00:21:36.225832 2072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-ec0975c3e1?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="800ms" Nov 1 00:21:36.342245 env[1490]: time="2025-11-01T00:21:36.341976514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-ec0975c3e1,Uid:80ac166ceeb064a078bb75921ed7e322,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:36.347426 env[1490]: time="2025-11-01T00:21:36.347386563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-ec0975c3e1,Uid:562cbd7c3e58142a8c3c7e7f96044b1e,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:36.360711 env[1490]: time="2025-11-01T00:21:36.360556193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-ec0975c3e1,Uid:d5e69aefbc1489fe413ddce99de11267,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:36.433505 kubelet[2072]: E1101 00:21:36.433213 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:21:36.568477 kubelet[2072]: I1101 00:21:36.568452 2072 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.568980 kubelet[2072]: E1101 00:21:36.568952 2072 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:36.697123 kubelet[2072]: E1101 00:21:36.696896 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:21:36.743810 kubelet[2072]: E1101 00:21:36.743778 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:21:36.957395 kubelet[2072]: E1101 00:21:36.957160 2072 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-ec0975c3e1&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:21:36.993186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673573748.mount: Deactivated successfully. Nov 1 00:21:37.021859 env[1490]: time="2025-11-01T00:21:37.021816723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.027050 kubelet[2072]: E1101 00:21:37.027010 2072 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-ec0975c3e1?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="1.6s" Nov 1 00:21:37.039596 env[1490]: time="2025-11-01T00:21:37.039557264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.049729 env[1490]: time="2025-11-01T00:21:37.049673038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.055064 env[1490]: time="2025-11-01T00:21:37.055028422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.059802 env[1490]: time="2025-11-01T00:21:37.059759574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.069005 env[1490]: time="2025-11-01T00:21:37.068957019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.073492 env[1490]: time="2025-11-01T00:21:37.073447670Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.079728 env[1490]: time="2025-11-01T00:21:37.079675426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.089360 env[1490]: time="2025-11-01T00:21:37.089306517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.093242 env[1490]: time="2025-11-01T00:21:37.093200255Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.100676 env[1490]: time="2025-11-01T00:21:37.100632117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.105034 env[1490]: time="2025-11-01T00:21:37.104989098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:37.161125 env[1490]: time="2025-11-01T00:21:37.160259563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:37.161125 env[1490]: time="2025-11-01T00:21:37.160303319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:37.161125 env[1490]: time="2025-11-01T00:21:37.160314878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:37.161125 env[1490]: time="2025-11-01T00:21:37.160633854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/690596bff5f7a5f6c4451e7e003a04872f5a0d9dfb76b068ddd7f5a653dd97ec pid=2116 runtime=io.containerd.runc.v2 Nov 1 00:21:37.182536 systemd[1]: Started cri-containerd-690596bff5f7a5f6c4451e7e003a04872f5a0d9dfb76b068ddd7f5a653dd97ec.scope. Nov 1 00:21:37.197005 env[1490]: time="2025-11-01T00:21:37.196899875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:37.197005 env[1490]: time="2025-11-01T00:21:37.196944351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:37.197005 env[1490]: time="2025-11-01T00:21:37.196954991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:37.197215 env[1490]: time="2025-11-01T00:21:37.197160055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2140a8dab4de402d40dfb55df54d0e88252c57be5c0a7b20858cdf018d9ac70 pid=2149 runtime=io.containerd.runc.v2 Nov 1 00:21:37.220534 systemd[1]: Started cri-containerd-f2140a8dab4de402d40dfb55df54d0e88252c57be5c0a7b20858cdf018d9ac70.scope. Nov 1 00:21:37.232346 env[1490]: time="2025-11-01T00:21:37.232142296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:37.232346 env[1490]: time="2025-11-01T00:21:37.232191972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:37.232346 env[1490]: time="2025-11-01T00:21:37.232202931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:37.233033 env[1490]: time="2025-11-01T00:21:37.232942154Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0156ea9fd0f927110792f5452a2a46db4701648a58ef22f8c297b80a82188730 pid=2177 runtime=io.containerd.runc.v2 Nov 1 00:21:37.237643 env[1490]: time="2025-11-01T00:21:37.237601112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-ec0975c3e1,Uid:80ac166ceeb064a078bb75921ed7e322,Namespace:kube-system,Attempt:0,} returns sandbox id \"690596bff5f7a5f6c4451e7e003a04872f5a0d9dfb76b068ddd7f5a653dd97ec\"" Nov 1 00:21:37.249849 systemd[1]: Started cri-containerd-0156ea9fd0f927110792f5452a2a46db4701648a58ef22f8c297b80a82188730.scope. Nov 1 00:21:37.253175 env[1490]: time="2025-11-01T00:21:37.253135784Z" level=info msg="CreateContainer within sandbox \"690596bff5f7a5f6c4451e7e003a04872f5a0d9dfb76b068ddd7f5a653dd97ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:21:37.273231 env[1490]: time="2025-11-01T00:21:37.273185066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-ec0975c3e1,Uid:562cbd7c3e58142a8c3c7e7f96044b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2140a8dab4de402d40dfb55df54d0e88252c57be5c0a7b20858cdf018d9ac70\"" Nov 1 00:21:37.285180 env[1490]: time="2025-11-01T00:21:37.285137977Z" level=info msg="CreateContainer within sandbox \"f2140a8dab4de402d40dfb55df54d0e88252c57be5c0a7b20858cdf018d9ac70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:21:37.296914 env[1490]: time="2025-11-01T00:21:37.296869465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-ec0975c3e1,Uid:d5e69aefbc1489fe413ddce99de11267,Namespace:kube-system,Attempt:0,} returns sandbox id \"0156ea9fd0f927110792f5452a2a46db4701648a58ef22f8c297b80a82188730\"" Nov 1 00:21:37.307888 env[1490]: time="2025-11-01T00:21:37.307838173Z" level=info msg="CreateContainer within sandbox \"0156ea9fd0f927110792f5452a2a46db4701648a58ef22f8c297b80a82188730\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:21:37.317141 env[1490]: time="2025-11-01T00:21:37.317081814Z" level=info msg="CreateContainer within sandbox \"690596bff5f7a5f6c4451e7e003a04872f5a0d9dfb76b068ddd7f5a653dd97ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f90914ae1203217c5933a8c6a06716906b45866984aece2debc948207697c8be\"" Nov 1 00:21:37.317874 env[1490]: time="2025-11-01T00:21:37.317847075Z" level=info msg="StartContainer for \"f90914ae1203217c5933a8c6a06716906b45866984aece2debc948207697c8be\"" Nov 1 00:21:37.342965 systemd[1]: Started cri-containerd-f90914ae1203217c5933a8c6a06716906b45866984aece2debc948207697c8be.scope. 
Nov 1 00:21:37.354633 env[1490]: time="2025-11-01T00:21:37.354584779Z" level=info msg="CreateContainer within sandbox \"f2140a8dab4de402d40dfb55df54d0e88252c57be5c0a7b20858cdf018d9ac70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aad507ddb91b9a369deaee40ed06a2379b7aa065f69a45d55ea3af1beaa7a85a\"" Nov 1 00:21:37.355477 env[1490]: time="2025-11-01T00:21:37.355445472Z" level=info msg="StartContainer for \"aad507ddb91b9a369deaee40ed06a2379b7aa065f69a45d55ea3af1beaa7a85a\"" Nov 1 00:21:37.372050 kubelet[2072]: I1101 00:21:37.371605 2072 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:37.372050 kubelet[2072]: E1101 00:21:37.371994 2072 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:37.373263 env[1490]: time="2025-11-01T00:21:37.373207492Z" level=info msg="CreateContainer within sandbox \"0156ea9fd0f927110792f5452a2a46db4701648a58ef22f8c297b80a82188730\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d433e07424156d8a8637a7224d45a30eda94220fc60b42bcb483ee82149940a\"" Nov 1 00:21:37.375389 systemd[1]: Started cri-containerd-aad507ddb91b9a369deaee40ed06a2379b7aa065f69a45d55ea3af1beaa7a85a.scope. Nov 1 00:21:37.378838 env[1490]: time="2025-11-01T00:21:37.378803537Z" level=info msg="StartContainer for \"6d433e07424156d8a8637a7224d45a30eda94220fc60b42bcb483ee82149940a\"" Nov 1 00:21:37.412480 env[1490]: time="2025-11-01T00:21:37.412435603Z" level=info msg="StartContainer for \"f90914ae1203217c5933a8c6a06716906b45866984aece2debc948207697c8be\" returns successfully" Nov 1 00:21:37.422679 systemd[1]: Started cri-containerd-6d433e07424156d8a8637a7224d45a30eda94220fc60b42bcb483ee82149940a.scope. Nov 1 00:21:37.432299 env[1490]: time="2025-11-01T00:21:37.432248943Z" level=info msg="StartContainer for \"aad507ddb91b9a369deaee40ed06a2379b7aa065f69a45d55ea3af1beaa7a85a\" returns successfully" Nov 1 00:21:37.488112 env[1490]: time="2025-11-01T00:21:37.487993170Z" level=info msg="StartContainer for \"6d433e07424156d8a8637a7224d45a30eda94220fc60b42bcb483ee82149940a\" returns successfully" Nov 1 00:21:37.719091 kubelet[2072]: E1101 00:21:37.719065 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:37.721079 kubelet[2072]: E1101 00:21:37.721054 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:37.722978 kubelet[2072]: E1101 00:21:37.722955 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:37.988936 systemd[1]: run-containerd-runc-k8s.io-690596bff5f7a5f6c4451e7e003a04872f5a0d9dfb76b068ddd7f5a653dd97ec-runc.wzEqHp.mount: Deactivated successfully. 
Nov 1 00:21:38.725319 kubelet[2072]: E1101 00:21:38.725292 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:38.726062 kubelet[2072]: E1101 00:21:38.726042 2072 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:38.973918 kubelet[2072]: I1101 00:21:38.973893 2072 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.514491 kubelet[2072]: I1101 00:21:39.514460 2072 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.521986 kubelet[2072]: I1101 00:21:39.521957 2072 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.598946 kubelet[2072]: I1101 00:21:39.598906 2072 apiserver.go:52] "Watching apiserver" Nov 1 00:21:39.623856 kubelet[2072]: I1101 00:21:39.623818 2072 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:21:39.656926 kubelet[2072]: E1101 00:21:39.656891 2072 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.657105 kubelet[2072]: I1101 00:21:39.657089 2072 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.659284 kubelet[2072]: E1101 00:21:39.659251 2072 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.659400 kubelet[2072]: I1101 00:21:39.659388 2072 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:39.661264 kubelet[2072]: E1101 00:21:39.661228 2072 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-ec0975c3e1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:41.561976 kubelet[2072]: I1101 00:21:41.561940 2072 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:41.571010 kubelet[2072]: I1101 00:21:41.570975 2072 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:21:42.182038 systemd[1]: Reloading. 
Nov 1 00:21:42.266622 /usr/lib/systemd/system-generators/torcx-generator[2373]: time="2025-11-01T00:21:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:21:42.266655 /usr/lib/systemd/system-generators/torcx-generator[2373]: time="2025-11-01T00:21:42Z" level=info msg="torcx already run" Nov 1 00:21:42.348828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:21:42.348847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:21:42.366154 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:21:42.486523 systemd[1]: Stopping kubelet.service... Nov 1 00:21:42.511199 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:21:42.512775 systemd[1]: Stopped kubelet.service. Nov 1 00:21:42.512841 systemd[1]: kubelet.service: Consumed 1.218s CPU time. Nov 1 00:21:42.514782 systemd[1]: Starting kubelet.service... Nov 1 00:21:42.615380 systemd[1]: Started kubelet.service. Nov 1 00:21:42.678375 kubelet[2437]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:21:42.678375 kubelet[2437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:21:42.678723 kubelet[2437]: I1101 00:21:42.678417 2437 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:21:42.684514 kubelet[2437]: I1101 00:21:42.684469 2437 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:21:42.684514 kubelet[2437]: I1101 00:21:42.684506 2437 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:21:42.684736 kubelet[2437]: I1101 00:21:42.684537 2437 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:21:42.684736 kubelet[2437]: I1101 00:21:42.684545 2437 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:21:42.685179 kubelet[2437]: I1101 00:21:42.685160 2437 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:21:42.687569 kubelet[2437]: I1101 00:21:42.687545 2437 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:21:42.690405 kubelet[2437]: I1101 00:21:42.690377 2437 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:21:42.693803 kubelet[2437]: E1101 00:21:42.693737 2437 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:21:42.693970 kubelet[2437]: I1101 00:21:42.693957 2437 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:21:42.696815 kubelet[2437]: I1101 00:21:42.696795 2437 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 00:21:42.697105 kubelet[2437]: I1101 00:21:42.697079 2437 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:21:42.697313 kubelet[2437]: I1101 00:21:42.697165 2437 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-ec0975c3e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:21:42.697424 kubelet[2437]: I1101 00:21:42.697413 2437 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:21:42.697481 kubelet[2437]: I1101 00:21:42.697473 2437 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:21:42.697554 kubelet[2437]: I1101 00:21:42.697545 2437 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:21:42.698538 kubelet[2437]: I1101 00:21:42.698514 2437 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:42.698673 kubelet[2437]: I1101 00:21:42.698655 2437 kubelet.go:475] "Attempting to sync node 
with API server" Nov 1 00:21:42.698733 kubelet[2437]: I1101 00:21:42.698680 2437 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:21:42.698733 kubelet[2437]: I1101 00:21:42.698729 2437 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:21:42.698783 kubelet[2437]: I1101 00:21:42.698741 2437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:21:42.701901 kubelet[2437]: I1101 00:21:42.701882 2437 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:21:42.702526 kubelet[2437]: I1101 00:21:42.702507 2437 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:21:42.703795 kubelet[2437]: I1101 00:21:42.703764 2437 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:21:42.712138 kubelet[2437]: I1101 00:21:42.709173 2437 server.go:1262] "Started kubelet" Nov 1 00:21:42.715555 kubelet[2437]: I1101 00:21:42.715187 2437 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:21:42.715555 kubelet[2437]: I1101 00:21:42.715264 2437 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:21:42.715555 kubelet[2437]: I1101 00:21:42.715502 2437 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:21:42.715708 kubelet[2437]: I1101 00:21:42.715558 2437 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:21:42.719307 kubelet[2437]: I1101 00:21:42.719280 2437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:21:42.745839 kubelet[2437]: I1101 00:21:42.745796 2437 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:21:42.747568 kubelet[2437]: I1101 00:21:42.747549 2437 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:21:42.747992 kubelet[2437]: E1101 00:21:42.747969 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-ec0975c3e1\" not found" Nov 1 00:21:42.749341 kubelet[2437]: I1101 00:21:42.749319 2437 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:21:42.749550 kubelet[2437]: I1101 00:21:42.749539 2437 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:21:42.751005 kubelet[2437]: I1101 00:21:42.750972 2437 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:21:42.751920 kubelet[2437]: I1101 00:21:42.751903 2437 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:21:42.752021 kubelet[2437]: I1101 00:21:42.752011 2437 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:21:42.752091 kubelet[2437]: I1101 00:21:42.752082 2437 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:21:42.752194 kubelet[2437]: E1101 00:21:42.752173 2437 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:21:42.760354 kubelet[2437]: I1101 00:21:42.739985 2437 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:21:42.765115 kubelet[2437]: I1101 00:21:42.765089 2437 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:21:42.767731 kubelet[2437]: I1101 00:21:42.765314 2437 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:21:42.772857 kubelet[2437]: E1101 00:21:42.771422 2437 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:21:42.773615 kubelet[2437]: I1101 00:21:42.773577 2437 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:21:42.821852 kubelet[2437]: I1101 00:21:42.821821 2437 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:21:42.822025 kubelet[2437]: I1101 00:21:42.822010 2437 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:21:42.822087 kubelet[2437]: I1101 00:21:42.822078 2437 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:21:42.822294 kubelet[2437]: I1101 00:21:42.822282 2437 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:21:42.822377 kubelet[2437]: I1101 00:21:42.822351 2437 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:21:42.822431 kubelet[2437]: I1101 00:21:42.822423 2437 policy_none.go:49] "None policy: Start" Nov 1 00:21:42.822488 kubelet[2437]: I1101 00:21:42.822479 2437 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:21:42.822554 kubelet[2437]: I1101 00:21:42.822537 2437 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:21:42.822775 kubelet[2437]: I1101 00:21:42.822760 2437 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:21:42.822850 kubelet[2437]: I1101 00:21:42.822841 2437 policy_none.go:47] "Start" Nov 1 00:21:42.835116 kubelet[2437]: E1101 00:21:42.835092 2437 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:21:42.835659 kubelet[2437]: I1101 00:21:42.835645 2437 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:21:42.836138 kubelet[2437]: I1101 00:21:42.836098 2437 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:21:42.836467 kubelet[2437]: I1101 00:21:42.836452 2437 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:21:42.840385 kubelet[2437]: E1101 00:21:42.840368 2437 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:21:42.852873 kubelet[2437]: I1101 00:21:42.852842 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.853121 kubelet[2437]: I1101 00:21:42.853088 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.853220 kubelet[2437]: I1101 00:21:42.852954 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.877581 kubelet[2437]: I1101 00:21:42.877527 2437 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:21:42.877820 kubelet[2437]: I1101 00:21:42.877798 2437 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:21:42.877868 kubelet[2437]: E1101 00:21:42.877845 2437 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-ec0975c3e1\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.877944 kubelet[2437]: I1101 00:21:42.877929 2437 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:21:42.938888 kubelet[2437]: I1101 00:21:42.938857 2437 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951051 kubelet[2437]: I1101 00:21:42.951018 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951278 kubelet[2437]: I1101 00:21:42.951262 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951356 kubelet[2437]: I1101 00:21:42.951344 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951439 kubelet[2437]: I1101 00:21:42.951424 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951510 kubelet[2437]: I1101 00:21:42.951497 2437 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5e69aefbc1489fe413ddce99de11267-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-ec0975c3e1\" (UID: \"d5e69aefbc1489fe413ddce99de11267\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951589 kubelet[2437]: I1101 00:21:42.951576 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ac166ceeb064a078bb75921ed7e322-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" (UID: \"80ac166ceeb064a078bb75921ed7e322\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951652 kubelet[2437]: I1101 00:21:42.951640 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ac166ceeb064a078bb75921ed7e322-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" (UID: \"80ac166ceeb064a078bb75921ed7e322\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951748 kubelet[2437]: I1101 00:21:42.951734 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ac166ceeb064a078bb75921ed7e322-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" (UID: \"80ac166ceeb064a078bb75921ed7e322\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.951815 kubelet[2437]: I1101 00:21:42.951802 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/562cbd7c3e58142a8c3c7e7f96044b1e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" (UID: \"562cbd7c3e58142a8c3c7e7f96044b1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.955071 kubelet[2437]: I1101 00:21:42.955043 2437 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:42.955295 kubelet[2437]: I1101 00:21:42.955284 2437 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:43.246096 sudo[2472]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:21:43.246314 sudo[2472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:21:43.699479 kubelet[2437]: I1101 00:21:43.699385 2437 apiserver.go:52] "Watching apiserver" Nov 1 00:21:43.745300 sudo[2472]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:43.749771 kubelet[2437]: I1101 00:21:43.749730 2437 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:21:43.799434 kubelet[2437]: I1101 00:21:43.799405 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:43.799954 kubelet[2437]: I1101 00:21:43.799937 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:43.813809 kubelet[2437]: I1101 00:21:43.813779 2437 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:21:43.814023 kubelet[2437]: E1101 
00:21:43.814007 2437 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-ec0975c3e1\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:43.815882 kubelet[2437]: I1101 00:21:43.815861 2437 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:21:43.816038 kubelet[2437]: E1101 00:21:43.816024 2437 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-ec0975c3e1\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" Nov 1 00:21:43.826453 kubelet[2437]: I1101 00:21:43.826398 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-ec0975c3e1" podStartSLOduration=1.82638312 podStartE2EDuration="1.82638312s" podCreationTimestamp="2025-11-01 00:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:43.824032678 +0000 UTC m=+1.199552389" watchObservedRunningTime="2025-11-01 00:21:43.82638312 +0000 UTC m=+1.201902791" Nov 1 00:21:43.849816 kubelet[2437]: I1101 00:21:43.849753 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-ec0975c3e1" podStartSLOduration=1.8497345950000001 podStartE2EDuration="1.849734595s" podCreationTimestamp="2025-11-01 00:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:43.838715454 +0000 UTC m=+1.214235125" watchObservedRunningTime="2025-11-01 00:21:43.849734595 +0000 UTC m=+1.225254266" Nov 1 00:21:45.931358 sudo[1769]: pam_unix(sudo:session): session closed for user root Nov 1 00:21:46.010340 sshd[1766]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:46.014244 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:21:46.014428 systemd[1]: session-7.scope: Consumed 7.606s CPU time. Nov 1 00:21:46.015207 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:21:46.015315 systemd[1]: sshd@4-10.200.20.48:22-10.200.16.10:46444.service: Deactivated successfully. Nov 1 00:21:46.016573 systemd-logind[1467]: Removed session 7. Nov 1 00:21:46.766867 kubelet[2437]: I1101 00:21:46.766841 2437 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:21:46.767569 env[1490]: time="2025-11-01T00:21:46.767529933Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:21:46.767844 kubelet[2437]: I1101 00:21:46.767761 2437 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:21:47.499211 kubelet[2437]: I1101 00:21:47.499134 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-ec0975c3e1" podStartSLOduration=6.499118896 podStartE2EDuration="6.499118896s" podCreationTimestamp="2025-11-01 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:43.849677639 +0000 UTC m=+1.225197310" watchObservedRunningTime="2025-11-01 00:21:47.499118896 +0000 UTC m=+4.874638567" Nov 1 00:21:47.837966 systemd[1]: Created slice kubepods-besteffort-pod4988df46_c670_4f10_a2c2_e91b9c39d126.slice. Nov 1 00:21:47.847882 systemd[1]: Created slice kubepods-burstable-pod9e0469e6_04e0_472c_8850_4d766fdef3e0.slice. Nov 1 00:21:47.880145 kubelet[2437]: I1101 00:21:47.880107 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-hostproc\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880534 kubelet[2437]: I1101 00:21:47.880518 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cni-path\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880603 kubelet[2437]: I1101 00:21:47.880590 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-lib-modules\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880678 kubelet[2437]: I1101 00:21:47.880658 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-hubble-tls\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880768 kubelet[2437]: I1101 00:21:47.880755 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-config-path\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880842 kubelet[2437]: I1101 00:21:47.880829 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-kernel\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880917 kubelet[2437]: I1101 00:21:47.880904 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ddxv\" (UniqueName: \"kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-kube-api-access-4ddxv\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " 
pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.880985 kubelet[2437]: I1101 00:21:47.880971 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4988df46-c670-4f10-a2c2-e91b9c39d126-kube-proxy\") pod \"kube-proxy-m9bt6\" (UID: \"4988df46-c670-4f10-a2c2-e91b9c39d126\") " pod="kube-system/kube-proxy-m9bt6" Nov 1 00:21:47.881054 kubelet[2437]: I1101 00:21:47.881043 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4988df46-c670-4f10-a2c2-e91b9c39d126-xtables-lock\") pod \"kube-proxy-m9bt6\" (UID: \"4988df46-c670-4f10-a2c2-e91b9c39d126\") " pod="kube-system/kube-proxy-m9bt6" Nov 1 00:21:47.881120 kubelet[2437]: I1101 00:21:47.881107 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26j6\" (UniqueName: \"kubernetes.io/projected/4988df46-c670-4f10-a2c2-e91b9c39d126-kube-api-access-n26j6\") pod \"kube-proxy-m9bt6\" (UID: \"4988df46-c670-4f10-a2c2-e91b9c39d126\") " pod="kube-system/kube-proxy-m9bt6" Nov 1 00:21:47.881189 kubelet[2437]: I1101 00:21:47.881173 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-bpf-maps\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.881270 kubelet[2437]: I1101 00:21:47.881255 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-cgroup\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.881340 kubelet[2437]: I1101 00:21:47.881328 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4988df46-c670-4f10-a2c2-e91b9c39d126-lib-modules\") pod \"kube-proxy-m9bt6\" (UID: \"4988df46-c670-4f10-a2c2-e91b9c39d126\") " pod="kube-system/kube-proxy-m9bt6" Nov 1 00:21:47.881412 kubelet[2437]: I1101 00:21:47.881400 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-etc-cni-netd\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.881481 kubelet[2437]: I1101 00:21:47.881469 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-xtables-lock\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.881555 kubelet[2437]: I1101 00:21:47.881542 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e0469e6-04e0-472c-8850-4d766fdef3e0-clustermesh-secrets\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.881629 kubelet[2437]: I1101 00:21:47.881616 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-net\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.881740 kubelet[2437]: I1101 00:21:47.881706 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-run\") pod \"cilium-m4qzh\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " pod="kube-system/cilium-m4qzh" Nov 1 00:21:47.958993 systemd[1]: Created slice kubepods-besteffort-pode4f659f8_0a56_40fc_8345_105772c07b52.slice. Nov 1 00:21:47.982142 kubelet[2437]: I1101 00:21:47.982104 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzrd\" (UniqueName: \"kubernetes.io/projected/e4f659f8-0a56-40fc-8345-105772c07b52-kube-api-access-llzrd\") pod \"cilium-operator-6f9c7c5859-cz9w7\" (UID: \"e4f659f8-0a56-40fc-8345-105772c07b52\") " pod="kube-system/cilium-operator-6f9c7c5859-cz9w7" Nov 1 00:21:47.982446 kubelet[2437]: I1101 00:21:47.982427 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f659f8-0a56-40fc-8345-105772c07b52-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-cz9w7\" (UID: \"e4f659f8-0a56-40fc-8345-105772c07b52\") " pod="kube-system/cilium-operator-6f9c7c5859-cz9w7" Nov 1 00:21:47.983026 kubelet[2437]: I1101 00:21:47.983002 2437 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:21:48.155252 env[1490]: time="2025-11-01T00:21:48.154641888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9bt6,Uid:4988df46-c670-4f10-a2c2-e91b9c39d126,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:48.160325 env[1490]: time="2025-11-01T00:21:48.160094323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m4qzh,Uid:9e0469e6-04e0-472c-8850-4d766fdef3e0,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:48.209553 env[1490]: time="2025-11-01T00:21:48.205560497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:48.209553 env[1490]: time="2025-11-01T00:21:48.205606414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:48.209553 env[1490]: time="2025-11-01T00:21:48.205630133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:48.209553 env[1490]: time="2025-11-01T00:21:48.205980032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f3ac2e8a2f7dbc76cbe459ec6fc0d9c205d5afe1c789f52c5da9d0dcb485873 pid=2522 runtime=io.containerd.runc.v2 Nov 1 00:21:48.219256 env[1490]: time="2025-11-01T00:21:48.219151808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:48.219256 env[1490]: time="2025-11-01T00:21:48.219207644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:48.219256 env[1490]: time="2025-11-01T00:21:48.219218964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:48.219639 env[1490]: time="2025-11-01T00:21:48.219583222Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e pid=2547 runtime=io.containerd.runc.v2 Nov 1 00:21:48.224252 systemd[1]: Started cri-containerd-3f3ac2e8a2f7dbc76cbe459ec6fc0d9c205d5afe1c789f52c5da9d0dcb485873.scope. Nov 1 00:21:48.238245 systemd[1]: Started cri-containerd-379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e.scope. Nov 1 00:21:48.263992 env[1490]: time="2025-11-01T00:21:48.263938422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9bt6,Uid:4988df46-c670-4f10-a2c2-e91b9c39d126,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f3ac2e8a2f7dbc76cbe459ec6fc0d9c205d5afe1c789f52c5da9d0dcb485873\"" Nov 1 00:21:48.270891 env[1490]: time="2025-11-01T00:21:48.270839331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-cz9w7,Uid:e4f659f8-0a56-40fc-8345-105772c07b52,Namespace:kube-system,Attempt:0,}" Nov 1 00:21:48.271593 env[1490]: time="2025-11-01T00:21:48.271560888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m4qzh,Uid:9e0469e6-04e0-472c-8850-4d766fdef3e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\"" Nov 1 00:21:48.273619 env[1490]: time="2025-11-01T00:21:48.273593927Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:21:48.278664 env[1490]: time="2025-11-01T00:21:48.278602669Z" level=info msg="CreateContainer within sandbox \"3f3ac2e8a2f7dbc76cbe459ec6fc0d9c205d5afe1c789f52c5da9d0dcb485873\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:21:48.351389 env[1490]: time="2025-11-01T00:21:48.348486788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:21:48.351389 env[1490]: time="2025-11-01T00:21:48.348524026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:21:48.351389 env[1490]: time="2025-11-01T00:21:48.348534105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:21:48.351389 env[1490]: time="2025-11-01T00:21:48.348757892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827 pid=2607 runtime=io.containerd.runc.v2 Nov 1 00:21:48.354261 env[1490]: time="2025-11-01T00:21:48.354212647Z" level=info msg="CreateContainer within sandbox \"3f3ac2e8a2f7dbc76cbe459ec6fc0d9c205d5afe1c789f52c5da9d0dcb485873\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e26ee732934e8b2257685723d5aca201274527a6470418a54b58463e4ae0d70a\"" Nov 1 00:21:48.355966 env[1490]: time="2025-11-01T00:21:48.355927585Z" level=info msg="StartContainer for \"e26ee732934e8b2257685723d5aca201274527a6470418a54b58463e4ae0d70a\"" Nov 1 00:21:48.369073 systemd[1]: Started cri-containerd-be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827.scope. Nov 1 00:21:48.386095 systemd[1]: Started cri-containerd-e26ee732934e8b2257685723d5aca201274527a6470418a54b58463e4ae0d70a.scope. Nov 1 00:21:48.414008 env[1490]: time="2025-11-01T00:21:48.413889415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-cz9w7,Uid:e4f659f8-0a56-40fc-8345-105772c07b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827\"" Nov 1 00:21:48.447344 env[1490]: time="2025-11-01T00:21:48.447281747Z" level=info msg="StartContainer for \"e26ee732934e8b2257685723d5aca201274527a6470418a54b58463e4ae0d70a\" returns successfully" Nov 1 00:21:48.826605 kubelet[2437]: I1101 00:21:48.826537 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9bt6" podStartSLOduration=1.8265104110000001 podStartE2EDuration="1.826510411s" podCreationTimestamp="2025-11-01 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:21:48.826480733 +0000 UTC m=+6.202000404" watchObservedRunningTime="2025-11-01 00:21:48.826510411 +0000 UTC m=+6.202030082" Nov 1 00:21:53.132795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460266500.mount: Deactivated successfully. 
Nov 1 00:21:55.533761 env[1490]: time="2025-11-01T00:21:55.533715391Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:55.545129 env[1490]: time="2025-11-01T00:21:55.545033575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:55.553583 env[1490]: time="2025-11-01T00:21:55.553540262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:55.553846 env[1490]: time="2025-11-01T00:21:55.553819928Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 1 00:21:55.558010 env[1490]: time="2025-11-01T00:21:55.557969037Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:21:55.564537 env[1490]: time="2025-11-01T00:21:55.564485026Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:21:55.592476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959845872.mount: Deactivated successfully. Nov 1 00:21:55.597899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004153682.mount: Deactivated successfully. Nov 1 00:21:55.615682 env[1490]: time="2025-11-01T00:21:55.615627545Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\"" Nov 1 00:21:55.616455 env[1490]: time="2025-11-01T00:21:55.616425864Z" level=info msg="StartContainer for \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\"" Nov 1 00:21:55.635835 systemd[1]: Started cri-containerd-3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10.scope. Nov 1 00:21:55.669599 env[1490]: time="2025-11-01T00:21:55.669550442Z" level=info msg="StartContainer for \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\" returns successfully" Nov 1 00:21:55.673607 systemd[1]: cri-containerd-3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10.scope: Deactivated successfully. Nov 1 00:21:56.590421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10-rootfs.mount: Deactivated successfully. 
Nov 1 00:21:57.076894 env[1490]: time="2025-11-01T00:21:57.076844643Z" level=info msg="shim disconnected" id=3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10 Nov 1 00:21:57.076894 env[1490]: time="2025-11-01T00:21:57.076889161Z" level=warning msg="cleaning up after shim disconnected" id=3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10 namespace=k8s.io Nov 1 00:21:57.076894 env[1490]: time="2025-11-01T00:21:57.076897441Z" level=info msg="cleaning up dead shim" Nov 1 00:21:57.083921 env[1490]: time="2025-11-01T00:21:57.083867061Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:21:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2857 runtime=io.containerd.runc.v2\n" Nov 1 00:21:57.839756 env[1490]: time="2025-11-01T00:21:57.839684003Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:21:57.876104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318991868.mount: Deactivated successfully. Nov 1 00:21:57.882729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344487315.mount: Deactivated successfully. Nov 1 00:21:57.896866 env[1490]: time="2025-11-01T00:21:57.896817620Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\"" Nov 1 00:21:57.897446 env[1490]: time="2025-11-01T00:21:57.897410151Z" level=info msg="StartContainer for \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\"" Nov 1 00:21:57.915345 systemd[1]: Started cri-containerd-ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f.scope. Nov 1 00:21:57.951372 env[1490]: time="2025-11-01T00:21:57.951311405Z" level=info msg="StartContainer for \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\" returns successfully" Nov 1 00:21:57.960127 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:21:57.960768 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:21:57.961059 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:21:57.964348 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:21:57.970742 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:21:57.972989 systemd[1]: cri-containerd-ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f.scope: Deactivated successfully. 
Nov 1 00:21:58.012730 env[1490]: time="2025-11-01T00:21:58.012667269Z" level=info msg="shim disconnected" id=ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f Nov 1 00:21:58.012730 env[1490]: time="2025-11-01T00:21:58.012725506Z" level=warning msg="cleaning up after shim disconnected" id=ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f namespace=k8s.io Nov 1 00:21:58.012730 env[1490]: time="2025-11-01T00:21:58.012735985Z" level=info msg="cleaning up dead shim" Nov 1 00:21:58.020223 env[1490]: time="2025-11-01T00:21:58.020178150Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:21:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2920 runtime=io.containerd.runc.v2\n" Nov 1 00:21:58.855523 env[1490]: time="2025-11-01T00:21:58.855463394Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:21:58.873296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f-rootfs.mount: Deactivated successfully. Nov 1 00:21:58.913772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376189514.mount: Deactivated successfully. Nov 1 00:21:58.919257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337972198.mount: Deactivated successfully. Nov 1 00:21:58.938666 env[1490]: time="2025-11-01T00:21:58.938617229Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\"" Nov 1 00:21:58.939321 env[1490]: time="2025-11-01T00:21:58.939294836Z" level=info msg="StartContainer for \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\"" Nov 1 00:21:58.971382 systemd[1]: Started cri-containerd-1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614.scope. Nov 1 00:21:59.006173 systemd[1]: cri-containerd-1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614.scope: Deactivated successfully. 
Nov 1 00:21:59.016090 env[1490]: time="2025-11-01T00:21:59.016033752Z" level=info msg="StartContainer for \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\" returns successfully" Nov 1 00:21:59.018805 env[1490]: time="2025-11-01T00:21:59.010001354Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0469e6_04e0_472c_8850_4d766fdef3e0.slice/cri-containerd-1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614.scope/memory.events\": no such file or directory" Nov 1 00:21:59.282446 env[1490]: time="2025-11-01T00:21:59.282400512Z" level=info msg="shim disconnected" id=1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614 Nov 1 00:21:59.282762 env[1490]: time="2025-11-01T00:21:59.282743616Z" level=warning msg="cleaning up after shim disconnected" id=1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614 namespace=k8s.io Nov 1 00:21:59.282849 env[1490]: time="2025-11-01T00:21:59.282833412Z" level=info msg="cleaning up dead shim" Nov 1 00:21:59.300417 env[1490]: time="2025-11-01T00:21:59.300369553Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:21:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2976 runtime=io.containerd.runc.v2\n" Nov 1 00:21:59.377970 env[1490]: time="2025-11-01T00:21:59.377919771Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:59.384751 env[1490]: time="2025-11-01T00:21:59.384710614Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:59.389320 env[1490]: time="2025-11-01T00:21:59.389280441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:21:59.389678 env[1490]: time="2025-11-01T00:21:59.389643784Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 1 00:21:59.401877 env[1490]: time="2025-11-01T00:21:59.401832254Z" level=info msg="CreateContainer within sandbox \"be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:21:59.436103 env[1490]: time="2025-11-01T00:21:59.436033857Z" level=info msg="CreateContainer within sandbox \"be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\"" Nov 1 00:21:59.436984 env[1490]: time="2025-11-01T00:21:59.436957454Z" level=info msg="StartContainer for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\"" Nov 1 00:21:59.453439 systemd[1]: Started cri-containerd-cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530.scope. 
Nov 1 00:21:59.483074 env[1490]: time="2025-11-01T00:21:59.483024263Z" level=info msg="StartContainer for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" returns successfully" Nov 1 00:21:59.851580 env[1490]: time="2025-11-01T00:21:59.851525293Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:21:59.901914 env[1490]: time="2025-11-01T00:21:59.901845863Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\"" Nov 1 00:21:59.902902 env[1490]: time="2025-11-01T00:21:59.902845016Z" level=info msg="StartContainer for \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\"" Nov 1 00:21:59.926250 systemd[1]: Started cri-containerd-518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf.scope. Nov 1 00:21:59.929030 kubelet[2437]: I1101 00:21:59.928965 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-cz9w7" podStartSLOduration=1.956415422 podStartE2EDuration="12.928947637s" podCreationTimestamp="2025-11-01 00:21:47 +0000 UTC" firstStartedPulling="2025-11-01 00:21:48.418242436 +0000 UTC m=+5.793762107" lastFinishedPulling="2025-11-01 00:21:59.390774651 +0000 UTC m=+16.766294322" observedRunningTime="2025-11-01 00:21:59.865814986 +0000 UTC m=+17.241334657" watchObservedRunningTime="2025-11-01 00:21:59.928947637 +0000 UTC m=+17.304467308" Nov 1 00:21:59.962145 systemd[1]: cri-containerd-518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf.scope: Deactivated successfully. 
Nov 1 00:21:59.963227 env[1490]: time="2025-11-01T00:21:59.962746939Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0469e6_04e0_472c_8850_4d766fdef3e0.slice/cri-containerd-518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf.scope/memory.events\": no such file or directory" Nov 1 00:21:59.973383 env[1490]: time="2025-11-01T00:21:59.973323325Z" level=info msg="StartContainer for \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\" returns successfully" Nov 1 00:22:00.035150 env[1490]: time="2025-11-01T00:22:00.035092112Z" level=info msg="shim disconnected" id=518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf Nov 1 00:22:00.035150 env[1490]: time="2025-11-01T00:22:00.035145470Z" level=warning msg="cleaning up after shim disconnected" id=518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf namespace=k8s.io Nov 1 00:22:00.035150 env[1490]: time="2025-11-01T00:22:00.035155070Z" level=info msg="cleaning up dead shim" Nov 1 00:22:00.046315 env[1490]: time="2025-11-01T00:22:00.046253522Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:22:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3068 runtime=io.containerd.runc.v2\n" Nov 1 00:22:00.863799 env[1490]: time="2025-11-01T00:22:00.863747848Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:22:00.873392 systemd[1]: run-containerd-runc-k8s.io-518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf-runc.QNHO5u.mount: Deactivated successfully. Nov 1 00:22:00.873525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf-rootfs.mount: Deactivated successfully. Nov 1 00:22:00.903066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121013259.mount: Deactivated successfully. Nov 1 00:22:00.919507 env[1490]: time="2025-11-01T00:22:00.919458260Z" level=info msg="CreateContainer within sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\"" Nov 1 00:22:00.920378 env[1490]: time="2025-11-01T00:22:00.920335779Z" level=info msg="StartContainer for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\"" Nov 1 00:22:00.938396 systemd[1]: Started cri-containerd-83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a.scope. Nov 1 00:22:00.984364 env[1490]: time="2025-11-01T00:22:00.984326732Z" level=info msg="StartContainer for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" returns successfully" Nov 1 00:22:01.090711 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Nov 1 00:22:01.091742 kubelet[2437]: I1101 00:22:01.091711 2437 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:22:01.146465 systemd[1]: Created slice kubepods-burstable-podf93fcfd8_485f_4878_8379_8b0600209f39.slice. Nov 1 00:22:01.156113 systemd[1]: Created slice kubepods-burstable-pod9d12e0ec_118d_4916_bb2f_db839ebb2fe2.slice. 
Nov 1 00:22:01.167042 kubelet[2437]: I1101 00:22:01.166998 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fh56\" (UniqueName: \"kubernetes.io/projected/9d12e0ec-118d-4916-bb2f-db839ebb2fe2-kube-api-access-8fh56\") pod \"coredns-66bc5c9577-2mskw\" (UID: \"9d12e0ec-118d-4916-bb2f-db839ebb2fe2\") " pod="kube-system/coredns-66bc5c9577-2mskw" Nov 1 00:22:01.167329 kubelet[2437]: I1101 00:22:01.167290 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5z9w\" (UniqueName: \"kubernetes.io/projected/f93fcfd8-485f-4878-8379-8b0600209f39-kube-api-access-r5z9w\") pod \"coredns-66bc5c9577-hlkvt\" (UID: \"f93fcfd8-485f-4878-8379-8b0600209f39\") " pod="kube-system/coredns-66bc5c9577-hlkvt" Nov 1 00:22:01.167442 kubelet[2437]: I1101 00:22:01.167430 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f93fcfd8-485f-4878-8379-8b0600209f39-config-volume\") pod \"coredns-66bc5c9577-hlkvt\" (UID: \"f93fcfd8-485f-4878-8379-8b0600209f39\") " pod="kube-system/coredns-66bc5c9577-hlkvt" Nov 1 00:22:01.167561 kubelet[2437]: I1101 00:22:01.167549 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d12e0ec-118d-4916-bb2f-db839ebb2fe2-config-volume\") pod \"coredns-66bc5c9577-2mskw\" (UID: \"9d12e0ec-118d-4916-bb2f-db839ebb2fe2\") " pod="kube-system/coredns-66bc5c9577-2mskw" Nov 1 00:22:01.464702 env[1490]: time="2025-11-01T00:22:01.464551556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hlkvt,Uid:f93fcfd8-485f-4878-8379-8b0600209f39,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:01.472766 env[1490]: time="2025-11-01T00:22:01.472726829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2mskw,Uid:9d12e0ec-118d-4916-bb2f-db839ebb2fe2,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:01.905723 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Nov 1 00:22:03.566372 systemd-networkd[1648]: cilium_host: Link UP Nov 1 00:22:03.566492 systemd-networkd[1648]: cilium_net: Link UP Nov 1 00:22:03.566495 systemd-networkd[1648]: cilium_net: Gained carrier Nov 1 00:22:03.566602 systemd-networkd[1648]: cilium_host: Gained carrier Nov 1 00:22:03.574756 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 00:22:03.574928 systemd-networkd[1648]: cilium_host: Gained IPv6LL Nov 1 00:22:03.777496 systemd-networkd[1648]: cilium_vxlan: Link UP Nov 1 00:22:03.777507 systemd-networkd[1648]: cilium_vxlan: Gained carrier Nov 1 00:22:04.075716 kernel: NET: Registered PF_ALG protocol family Nov 1 00:22:04.305852 systemd-networkd[1648]: cilium_net: Gained IPv6LL Nov 1 00:22:04.818817 systemd-networkd[1648]: cilium_vxlan: Gained IPv6LL Nov 1 00:22:04.926158 systemd-networkd[1648]: lxc_health: Link UP Nov 1 00:22:04.940010 systemd-networkd[1648]: lxc_health: Gained carrier Nov 1 00:22:04.940718 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:22:05.079062 systemd-networkd[1648]: lxc43d0295a2b1e: Link UP Nov 1 00:22:05.086752 kernel: eth0: renamed from tmp468ee Nov 1 00:22:05.092524 systemd-networkd[1648]: lxc488025ac0de1: Link UP Nov 1 00:22:05.113855 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc43d0295a2b1e: link becomes ready Nov 1 00:22:05.113969 kernel: eth0: renamed from tmpa943c Nov 1 00:22:05.114039 systemd-networkd[1648]: lxc43d0295a2b1e: Gained carrier Nov 1 00:22:05.126868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc488025ac0de1: link becomes ready Nov 1 00:22:05.126671 systemd-networkd[1648]: lxc488025ac0de1: Gained carrier Nov 1 00:22:06.176666 kubelet[2437]: I1101 00:22:06.176605 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m4qzh" podStartSLOduration=11.894439794 podStartE2EDuration="19.176589774s" podCreationTimestamp="2025-11-01 00:21:47 +0000 UTC" firstStartedPulling="2025-11-01 00:21:48.27320315 +0000 UTC m=+5.648722821" lastFinishedPulling="2025-11-01 00:21:55.55535313 +0000 UTC m=+12.930872801" observedRunningTime="2025-11-01 00:22:01.890790775 +0000 UTC m=+19.266310446" watchObservedRunningTime="2025-11-01 00:22:06.176589774 +0000 UTC m=+23.552109405" Nov 1 00:22:06.545840 systemd-networkd[1648]: lxc488025ac0de1: Gained IPv6LL Nov 1 00:22:06.609835 systemd-networkd[1648]: lxc_health: Gained IPv6LL Nov 1 00:22:07.057839 systemd-networkd[1648]: lxc43d0295a2b1e: Gained IPv6LL Nov 1 00:22:08.786160 env[1490]: time="2025-11-01T00:22:08.786079663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:08.786544 env[1490]: time="2025-11-01T00:22:08.786516206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:08.786666 env[1490]: time="2025-11-01T00:22:08.786643801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:08.786968 env[1490]: time="2025-11-01T00:22:08.786932510Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a943c83197b6bddb52f8c29cb2ead4b69bc6f849c58a23fb9444bb88641a7bd8 pid=3620 runtime=io.containerd.runc.v2 Nov 1 00:22:08.807160 systemd[1]: run-containerd-runc-k8s.io-a943c83197b6bddb52f8c29cb2ead4b69bc6f849c58a23fb9444bb88641a7bd8-runc.aDxhIa.mount: Deactivated successfully. 
Nov 1 00:22:08.813024 systemd[1]: Started cri-containerd-a943c83197b6bddb52f8c29cb2ead4b69bc6f849c58a23fb9444bb88641a7bd8.scope. Nov 1 00:22:08.846234 env[1490]: time="2025-11-01T00:22:08.846160155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:08.846408 env[1490]: time="2025-11-01T00:22:08.846203554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:08.846408 env[1490]: time="2025-11-01T00:22:08.846215273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:08.846659 env[1490]: time="2025-11-01T00:22:08.846613698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/468eebdf81b0b44938cfc2200519a138dc5d0cbdabe1e5d80df70d52dec44b8d pid=3654 runtime=io.containerd.runc.v2 Nov 1 00:22:08.873772 systemd[1]: Started cri-containerd-468eebdf81b0b44938cfc2200519a138dc5d0cbdabe1e5d80df70d52dec44b8d.scope. Nov 1 00:22:08.879354 env[1490]: time="2025-11-01T00:22:08.879317060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2mskw,Uid:9d12e0ec-118d-4916-bb2f-db839ebb2fe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a943c83197b6bddb52f8c29cb2ead4b69bc6f849c58a23fb9444bb88641a7bd8\"" Nov 1 00:22:08.889433 env[1490]: time="2025-11-01T00:22:08.889392586Z" level=info msg="CreateContainer within sandbox \"a943c83197b6bddb52f8c29cb2ead4b69bc6f849c58a23fb9444bb88641a7bd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:08.918881 env[1490]: time="2025-11-01T00:22:08.918837356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hlkvt,Uid:f93fcfd8-485f-4878-8379-8b0600209f39,Namespace:kube-system,Attempt:0,} returns sandbox id \"468eebdf81b0b44938cfc2200519a138dc5d0cbdabe1e5d80df70d52dec44b8d\"" Nov 1 00:22:08.931139 env[1490]: time="2025-11-01T00:22:08.931084117Z" level=info msg="CreateContainer within sandbox \"468eebdf81b0b44938cfc2200519a138dc5d0cbdabe1e5d80df70d52dec44b8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:08.945598 env[1490]: time="2025-11-01T00:22:08.945549832Z" level=info msg="CreateContainer within sandbox \"a943c83197b6bddb52f8c29cb2ead4b69bc6f849c58a23fb9444bb88641a7bd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11805c12b7ae6ad98404dce1e2188f968e0f672fb0184712df1f95eaeaa98edc\"" Nov 1 00:22:08.946293 env[1490]: time="2025-11-01T00:22:08.946266644Z" level=info msg="StartContainer for \"11805c12b7ae6ad98404dce1e2188f968e0f672fb0184712df1f95eaeaa98edc\"" Nov 1 00:22:08.970660 systemd[1]: Started cri-containerd-11805c12b7ae6ad98404dce1e2188f968e0f672fb0184712df1f95eaeaa98edc.scope. 
Nov 1 00:22:08.982547 env[1490]: time="2025-11-01T00:22:08.982494628Z" level=info msg="CreateContainer within sandbox \"468eebdf81b0b44938cfc2200519a138dc5d0cbdabe1e5d80df70d52dec44b8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0a3dcba31b2f2711fc785e9993ecbe91adc455f373f2ad2f66a51a347119121\"" Nov 1 00:22:08.983323 env[1490]: time="2025-11-01T00:22:08.983273318Z" level=info msg="StartContainer for \"c0a3dcba31b2f2711fc785e9993ecbe91adc455f373f2ad2f66a51a347119121\"" Nov 1 00:22:09.014535 env[1490]: time="2025-11-01T00:22:09.014442030Z" level=info msg="StartContainer for \"11805c12b7ae6ad98404dce1e2188f968e0f672fb0184712df1f95eaeaa98edc\" returns successfully" Nov 1 00:22:09.017348 systemd[1]: Started cri-containerd-c0a3dcba31b2f2711fc785e9993ecbe91adc455f373f2ad2f66a51a347119121.scope. Nov 1 00:22:09.057016 env[1490]: time="2025-11-01T00:22:09.056895562Z" level=info msg="StartContainer for \"c0a3dcba31b2f2711fc785e9993ecbe91adc455f373f2ad2f66a51a347119121\" returns successfully" Nov 1 00:22:09.790887 systemd[1]: run-containerd-runc-k8s.io-468eebdf81b0b44938cfc2200519a138dc5d0cbdabe1e5d80df70d52dec44b8d-runc.vRn8tm.mount: Deactivated successfully. Nov 1 00:22:09.890207 kubelet[2437]: I1101 00:22:09.890136 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hlkvt" podStartSLOduration=22.890120206 podStartE2EDuration="22.890120206s" podCreationTimestamp="2025-11-01 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:09.889665943 +0000 UTC m=+27.265185614" watchObservedRunningTime="2025-11-01 00:22:09.890120206 +0000 UTC m=+27.265639877" Nov 1 00:22:09.927977 kubelet[2437]: I1101 00:22:09.927882 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2mskw" podStartSLOduration=22.927865638 podStartE2EDuration="22.927865638s" podCreationTimestamp="2025-11-01 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:09.9267676 +0000 UTC m=+27.302287271" watchObservedRunningTime="2025-11-01 00:22:09.927865638 +0000 UTC m=+27.303385309" Nov 1 00:23:14.355657 systemd[1]: Started sshd@5-10.200.20.48:22-10.200.16.10:46314.service. Nov 1 00:23:14.785251 sshd[3788]: Accepted publickey for core from 10.200.16.10 port 46314 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:14.787008 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:14.791385 systemd[1]: Started session-8.scope. Nov 1 00:23:14.792606 systemd-logind[1467]: New session 8 of user core. Nov 1 00:23:15.208845 sshd[3788]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:15.211404 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:23:15.211593 systemd[1]: sshd@5-10.200.20.48:22-10.200.16.10:46314.service: Deactivated successfully. Nov 1 00:23:15.212388 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:23:15.213398 systemd-logind[1467]: Removed session 8. Nov 1 00:23:20.284074 systemd[1]: Started sshd@6-10.200.20.48:22-10.200.16.10:43072.service. 
Nov 1 00:23:20.713485 sshd[3802]: Accepted publickey for core from 10.200.16.10 port 43072 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:20.715125 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:20.719431 systemd[1]: Started session-9.scope. Nov 1 00:23:20.719924 systemd-logind[1467]: New session 9 of user core. Nov 1 00:23:21.091748 sshd[3802]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:21.094059 systemd[1]: sshd@6-10.200.20.48:22-10.200.16.10:43072.service: Deactivated successfully. Nov 1 00:23:21.094829 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:23:21.095374 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:23:21.096248 systemd-logind[1467]: Removed session 9. Nov 1 00:23:26.173146 systemd[1]: Started sshd@7-10.200.20.48:22-10.200.16.10:43080.service. Nov 1 00:23:26.600525 sshd[3814]: Accepted publickey for core from 10.200.16.10 port 43080 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:26.602258 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:26.607039 systemd[1]: Started session-10.scope. Nov 1 00:23:26.607839 systemd-logind[1467]: New session 10 of user core. Nov 1 00:23:26.974867 sshd[3814]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:26.978123 systemd[1]: sshd@7-10.200.20.48:22-10.200.16.10:43080.service: Deactivated successfully. Nov 1 00:23:26.978902 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:23:26.979538 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:23:26.980395 systemd-logind[1467]: Removed session 10. Nov 1 00:23:32.057907 systemd[1]: Started sshd@8-10.200.20.48:22-10.200.16.10:38760.service. Nov 1 00:23:32.519389 sshd[3826]: Accepted publickey for core from 10.200.16.10 port 38760 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:32.520684 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:32.524504 systemd-logind[1467]: New session 11 of user core. Nov 1 00:23:32.524987 systemd[1]: Started session-11.scope. Nov 1 00:23:32.911873 sshd[3826]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:32.915128 systemd[1]: sshd@8-10.200.20.48:22-10.200.16.10:38760.service: Deactivated successfully. Nov 1 00:23:32.915319 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:23:32.915880 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:23:32.916630 systemd-logind[1467]: Removed session 11. Nov 1 00:23:32.978412 systemd[1]: Started sshd@9-10.200.20.48:22-10.200.16.10:38774.service. Nov 1 00:23:33.408603 sshd[3839]: Accepted publickey for core from 10.200.16.10 port 38774 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:33.409894 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:33.413801 systemd-logind[1467]: New session 12 of user core. Nov 1 00:23:33.414281 systemd[1]: Started session-12.scope. Nov 1 00:23:33.841434 sshd[3839]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:33.844409 systemd[1]: sshd@9-10.200.20.48:22-10.200.16.10:38774.service: Deactivated successfully. Nov 1 00:23:33.845558 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:23:33.846416 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. 
Nov 1 00:23:33.847402 systemd-logind[1467]: Removed session 12. Nov 1 00:23:33.913980 systemd[1]: Started sshd@10-10.200.20.48:22-10.200.16.10:38782.service. Nov 1 00:23:34.343488 sshd[3849]: Accepted publickey for core from 10.200.16.10 port 38782 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:34.344826 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:34.349285 systemd[1]: Started session-13.scope. Nov 1 00:23:34.349586 systemd-logind[1467]: New session 13 of user core. Nov 1 00:23:34.716864 sshd[3849]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:34.720093 systemd[1]: sshd@10-10.200.20.48:22-10.200.16.10:38782.service: Deactivated successfully. Nov 1 00:23:34.720631 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:23:34.720834 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:23:34.721622 systemd-logind[1467]: Removed session 13. Nov 1 00:23:39.789600 systemd[1]: Started sshd@11-10.200.20.48:22-10.200.16.10:38790.service. Nov 1 00:23:40.224020 sshd[3861]: Accepted publickey for core from 10.200.16.10 port 38790 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:40.224856 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:40.229522 systemd[1]: Started session-14.scope. Nov 1 00:23:40.229667 systemd-logind[1467]: New session 14 of user core. Nov 1 00:23:40.602574 sshd[3861]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:40.605157 systemd[1]: sshd@11-10.200.20.48:22-10.200.16.10:38790.service: Deactivated successfully. Nov 1 00:23:40.605966 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:23:40.606567 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:23:40.607437 systemd-logind[1467]: Removed session 14. Nov 1 00:23:45.683339 systemd[1]: Started sshd@12-10.200.20.48:22-10.200.16.10:42708.service. Nov 1 00:23:46.139936 sshd[3874]: Accepted publickey for core from 10.200.16.10 port 42708 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:46.141574 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:46.145377 systemd-logind[1467]: New session 15 of user core. Nov 1 00:23:46.145889 systemd[1]: Started session-15.scope. Nov 1 00:23:46.553164 sshd[3874]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:46.555901 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:23:46.556097 systemd[1]: sshd@12-10.200.20.48:22-10.200.16.10:42708.service: Deactivated successfully. Nov 1 00:23:46.556846 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:23:46.557549 systemd-logind[1467]: Removed session 15. Nov 1 00:23:46.629357 systemd[1]: Started sshd@13-10.200.20.48:22-10.200.16.10:42710.service. Nov 1 00:23:47.088040 sshd[3887]: Accepted publickey for core from 10.200.16.10 port 42710 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:47.089354 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:47.093761 systemd-logind[1467]: New session 16 of user core. Nov 1 00:23:47.093860 systemd[1]: Started session-16.scope. Nov 1 00:23:47.527938 sshd[3887]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:47.530637 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit. 
Nov 1 00:23:47.531305 systemd[1]: sshd@13-10.200.20.48:22-10.200.16.10:42710.service: Deactivated successfully. Nov 1 00:23:47.532067 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:23:47.532533 systemd-logind[1467]: Removed session 16. Nov 1 00:23:47.594429 systemd[1]: Started sshd@14-10.200.20.48:22-10.200.16.10:42712.service. Nov 1 00:23:48.025341 sshd[3897]: Accepted publickey for core from 10.200.16.10 port 42712 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:48.027163 sshd[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:48.031730 systemd[1]: Started session-17.scope. Nov 1 00:23:48.032055 systemd-logind[1467]: New session 17 of user core. Nov 1 00:23:48.972250 sshd[3897]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:48.975188 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:23:48.975377 systemd[1]: sshd@14-10.200.20.48:22-10.200.16.10:42712.service: Deactivated successfully. Nov 1 00:23:48.976128 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:23:48.976945 systemd-logind[1467]: Removed session 17. Nov 1 00:23:49.052980 systemd[1]: Started sshd@15-10.200.20.48:22-10.200.16.10:42726.service. Nov 1 00:23:49.518586 sshd[3914]: Accepted publickey for core from 10.200.16.10 port 42726 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:49.519921 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:49.524563 systemd[1]: Started session-18.scope. Nov 1 00:23:49.525602 systemd-logind[1467]: New session 18 of user core. Nov 1 00:23:50.025548 sshd[3914]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:50.028581 systemd[1]: sshd@15-10.200.20.48:22-10.200.16.10:42726.service: Deactivated successfully. Nov 1 00:23:50.029379 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:23:50.029941 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:23:50.030947 systemd-logind[1467]: Removed session 18. Nov 1 00:23:50.112969 systemd[1]: Started sshd@16-10.200.20.48:22-10.200.16.10:45826.service. Nov 1 00:23:50.541115 sshd[3926]: Accepted publickey for core from 10.200.16.10 port 45826 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:50.542907 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:50.547337 systemd[1]: Started session-19.scope. Nov 1 00:23:50.547834 systemd-logind[1467]: New session 19 of user core. Nov 1 00:23:50.916646 sshd[3926]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:50.919626 systemd[1]: sshd@16-10.200.20.48:22-10.200.16.10:45826.service: Deactivated successfully. Nov 1 00:23:50.920434 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:23:50.921043 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:23:50.921969 systemd-logind[1467]: Removed session 19. Nov 1 00:23:55.984774 systemd[1]: Started sshd@17-10.200.20.48:22-10.200.16.10:45828.service. Nov 1 00:23:56.401630 sshd[3939]: Accepted publickey for core from 10.200.16.10 port 45828 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:23:56.403320 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:23:56.407707 systemd[1]: Started session-20.scope. Nov 1 00:23:56.408798 systemd-logind[1467]: New session 20 of user core. 
Nov 1 00:23:56.763853 sshd[3939]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:56.766811 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:23:56.766823 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:23:56.767460 systemd[1]: sshd@17-10.200.20.48:22-10.200.16.10:45828.service: Deactivated successfully. Nov 1 00:23:56.768560 systemd-logind[1467]: Removed session 20. Nov 1 00:24:01.838337 systemd[1]: Started sshd@18-10.200.20.48:22-10.200.16.10:51432.service. Nov 1 00:24:02.271816 sshd[3951]: Accepted publickey for core from 10.200.16.10 port 51432 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:24:02.273185 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:02.278040 systemd[1]: Started session-21.scope. Nov 1 00:24:02.278354 systemd-logind[1467]: New session 21 of user core. Nov 1 00:24:02.645908 sshd[3951]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:02.649133 systemd[1]: sshd@18-10.200.20.48:22-10.200.16.10:51432.service: Deactivated successfully. Nov 1 00:24:02.649325 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:24:02.649865 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:24:02.650630 systemd-logind[1467]: Removed session 21. Nov 1 00:24:02.717205 systemd[1]: Started sshd@19-10.200.20.48:22-10.200.16.10:51442.service. Nov 1 00:24:03.152891 sshd[3963]: Accepted publickey for core from 10.200.16.10 port 51442 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:24:03.154156 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:03.158560 systemd[1]: Started session-22.scope. Nov 1 00:24:03.159024 systemd-logind[1467]: New session 22 of user core. Nov 1 00:24:05.237477 systemd[1]: run-containerd-runc-k8s.io-83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a-runc.3VUiGO.mount: Deactivated successfully. Nov 1 00:24:05.246082 env[1490]: time="2025-11-01T00:24:05.246039955Z" level=info msg="StopContainer for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" with timeout 30 (s)" Nov 1 00:24:05.249012 env[1490]: time="2025-11-01T00:24:05.248966879Z" level=info msg="Stop container \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" with signal terminated" Nov 1 00:24:05.264927 env[1490]: time="2025-11-01T00:24:05.264861362Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:24:05.266722 systemd[1]: cri-containerd-cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530.scope: Deactivated successfully. 
Nov 1 00:24:05.288563 env[1490]: time="2025-11-01T00:24:05.288524524Z" level=info msg="StopContainer for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" with timeout 2 (s)" Nov 1 00:24:05.289124 env[1490]: time="2025-11-01T00:24:05.289093172Z" level=info msg="Stop container \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" with signal terminated" Nov 1 00:24:05.304105 systemd-networkd[1648]: lxc_health: Link DOWN Nov 1 00:24:05.304112 systemd-networkd[1648]: lxc_health: Lost carrier Nov 1 00:24:05.307118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530-rootfs.mount: Deactivated successfully. Nov 1 00:24:05.330581 systemd[1]: cri-containerd-83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a.scope: Deactivated successfully. Nov 1 00:24:05.330937 systemd[1]: cri-containerd-83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a.scope: Consumed 6.271s CPU time. Nov 1 00:24:05.347568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a-rootfs.mount: Deactivated successfully. Nov 1 00:24:05.386092 env[1490]: time="2025-11-01T00:24:05.386040253Z" level=info msg="shim disconnected" id=cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530 Nov 1 00:24:05.386092 env[1490]: time="2025-11-01T00:24:05.386085014Z" level=warning msg="cleaning up after shim disconnected" id=cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530 namespace=k8s.io Nov 1 00:24:05.386092 env[1490]: time="2025-11-01T00:24:05.386094174Z" level=info msg="cleaning up dead shim" Nov 1 00:24:05.386513 env[1490]: time="2025-11-01T00:24:05.386475420Z" level=info msg="shim disconnected" id=83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a Nov 1 00:24:05.386566 env[1490]: time="2025-11-01T00:24:05.386513620Z" level=warning msg="cleaning up after shim disconnected" id=83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a namespace=k8s.io Nov 1 00:24:05.386566 env[1490]: time="2025-11-01T00:24:05.386521700Z" level=info msg="cleaning up dead shim" Nov 1 00:24:05.393623 env[1490]: time="2025-11-01T00:24:05.393567168Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4034 runtime=io.containerd.runc.v2\n" Nov 1 00:24:05.394164 env[1490]: time="2025-11-01T00:24:05.394133496Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\n" Nov 1 00:24:05.401111 env[1490]: time="2025-11-01T00:24:05.401064242Z" level=info msg="StopContainer for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" returns successfully" Nov 1 00:24:05.401737 env[1490]: time="2025-11-01T00:24:05.401709772Z" level=info msg="StopPodSandbox for \"be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827\"" Nov 1 00:24:05.401889 env[1490]: time="2025-11-01T00:24:05.401869855Z" level=info msg="Container to stop \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:05.403936 env[1490]: time="2025-11-01T00:24:05.403897326Z" level=info msg="StopContainer for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" returns successfully" Nov 1 00:24:05.404489 env[1490]: time="2025-11-01T00:24:05.404465854Z" 
level=info msg="StopPodSandbox for \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\"" Nov 1 00:24:05.404626 env[1490]: time="2025-11-01T00:24:05.404605256Z" level=info msg="Container to stop \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:05.404713 env[1490]: time="2025-11-01T00:24:05.404677218Z" level=info msg="Container to stop \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:05.404785 env[1490]: time="2025-11-01T00:24:05.404767299Z" level=info msg="Container to stop \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:05.404846 env[1490]: time="2025-11-01T00:24:05.404828620Z" level=info msg="Container to stop \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:05.404904 env[1490]: time="2025-11-01T00:24:05.404889821Z" level=info msg="Container to stop \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:05.408341 systemd[1]: cri-containerd-be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827.scope: Deactivated successfully. Nov 1 00:24:05.411464 systemd[1]: cri-containerd-379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e.scope: Deactivated successfully. Nov 1 00:24:05.445588 env[1490]: time="2025-11-01T00:24:05.445537202Z" level=info msg="shim disconnected" id=be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827 Nov 1 00:24:05.445866 env[1490]: time="2025-11-01T00:24:05.445848846Z" level=warning msg="cleaning up after shim disconnected" id=be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827 namespace=k8s.io Nov 1 00:24:05.445933 env[1490]: time="2025-11-01T00:24:05.445920007Z" level=info msg="cleaning up dead shim" Nov 1 00:24:05.446942 env[1490]: time="2025-11-01T00:24:05.446889342Z" level=info msg="shim disconnected" id=379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e Nov 1 00:24:05.446942 env[1490]: time="2025-11-01T00:24:05.446937263Z" level=warning msg="cleaning up after shim disconnected" id=379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e namespace=k8s.io Nov 1 00:24:05.446942 env[1490]: time="2025-11-01T00:24:05.446946023Z" level=info msg="cleaning up dead shim" Nov 1 00:24:05.455468 env[1490]: time="2025-11-01T00:24:05.455419312Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4099 runtime=io.containerd.runc.v2\n" Nov 1 00:24:05.455948 env[1490]: time="2025-11-01T00:24:05.455920320Z" level=info msg="TearDown network for sandbox \"be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827\" successfully" Nov 1 00:24:05.456051 env[1490]: time="2025-11-01T00:24:05.456035202Z" level=info msg="StopPodSandbox for \"be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827\" returns successfully" Nov 1 00:24:05.459279 env[1490]: time="2025-11-01T00:24:05.459249691Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4100 runtime=io.containerd.runc.v2\n" Nov 1 00:24:05.459930 env[1490]: 
time="2025-11-01T00:24:05.459906341Z" level=info msg="TearDown network for sandbox \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" successfully" Nov 1 00:24:05.460035 env[1490]: time="2025-11-01T00:24:05.460017623Z" level=info msg="StopPodSandbox for \"379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e\" returns successfully" Nov 1 00:24:05.593568 kubelet[2437]: I1101 00:24:05.593522 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f659f8-0a56-40fc-8345-105772c07b52-cilium-config-path\") pod \"e4f659f8-0a56-40fc-8345-105772c07b52\" (UID: \"e4f659f8-0a56-40fc-8345-105772c07b52\") " Nov 1 00:24:05.593568 kubelet[2437]: I1101 00:24:05.593573 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-lib-modules\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.593976 kubelet[2437]: I1101 00:24:05.593589 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-cgroup\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.593976 kubelet[2437]: I1101 00:24:05.593602 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-etc-cni-netd\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.593976 kubelet[2437]: I1101 00:24:05.593623 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-hubble-tls\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.593976 kubelet[2437]: I1101 00:24:05.593639 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llzrd\" (UniqueName: \"kubernetes.io/projected/e4f659f8-0a56-40fc-8345-105772c07b52-kube-api-access-llzrd\") pod \"e4f659f8-0a56-40fc-8345-105772c07b52\" (UID: \"e4f659f8-0a56-40fc-8345-105772c07b52\") " Nov 1 00:24:05.593976 kubelet[2437]: I1101 00:24:05.593657 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ddxv\" (UniqueName: \"kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-kube-api-access-4ddxv\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.593976 kubelet[2437]: I1101 00:24:05.593672 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e0469e6-04e0-472c-8850-4d766fdef3e0-clustermesh-secrets\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594117 kubelet[2437]: I1101 00:24:05.593704 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cni-path\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594117 
kubelet[2437]: I1101 00:24:05.593719 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-hostproc\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594117 kubelet[2437]: I1101 00:24:05.593741 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-config-path\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594117 kubelet[2437]: I1101 00:24:05.593754 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-kernel\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594117 kubelet[2437]: I1101 00:24:05.593767 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-bpf-maps\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594117 kubelet[2437]: I1101 00:24:05.593780 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-xtables-lock\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594288 kubelet[2437]: I1101 00:24:05.593794 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-net\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594288 kubelet[2437]: I1101 00:24:05.593808 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-run\") pod \"9e0469e6-04e0-472c-8850-4d766fdef3e0\" (UID: \"9e0469e6-04e0-472c-8850-4d766fdef3e0\") " Nov 1 00:24:05.594288 kubelet[2437]: I1101 00:24:05.593874 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.595723 kubelet[2437]: I1101 00:24:05.595676 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4f659f8-0a56-40fc-8345-105772c07b52-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e4f659f8-0a56-40fc-8345-105772c07b52" (UID: "e4f659f8-0a56-40fc-8345-105772c07b52"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:24:05.595793 kubelet[2437]: I1101 00:24:05.595753 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.595793 kubelet[2437]: I1101 00:24:05.595769 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.595793 kubelet[2437]: I1101 00:24:05.595783 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.596050 kubelet[2437]: I1101 00:24:05.596028 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.596977 kubelet[2437]: I1101 00:24:05.596939 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.596977 kubelet[2437]: I1101 00:24:05.596979 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.597094 kubelet[2437]: I1101 00:24:05.596995 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.597094 kubelet[2437]: I1101 00:24:05.597009 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.598673 kubelet[2437]: I1101 00:24:05.598639 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:24:05.599921 kubelet[2437]: I1101 00:24:05.599870 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:05.600001 kubelet[2437]: I1101 00:24:05.599988 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:05.600923 kubelet[2437]: I1101 00:24:05.600883 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f659f8-0a56-40fc-8345-105772c07b52-kube-api-access-llzrd" (OuterVolumeSpecName: "kube-api-access-llzrd") pod "e4f659f8-0a56-40fc-8345-105772c07b52" (UID: "e4f659f8-0a56-40fc-8345-105772c07b52"). InnerVolumeSpecName "kube-api-access-llzrd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:05.602406 kubelet[2437]: I1101 00:24:05.602381 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-kube-api-access-4ddxv" (OuterVolumeSpecName: "kube-api-access-4ddxv") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "kube-api-access-4ddxv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:05.602935 kubelet[2437]: I1101 00:24:05.602909 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0469e6-04e0-472c-8850-4d766fdef3e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e0469e6-04e0-472c-8850-4d766fdef3e0" (UID: "9e0469e6-04e0-472c-8850-4d766fdef3e0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:24:05.694001 kubelet[2437]: I1101 00:24:05.693965 2437 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4ddxv\" (UniqueName: \"kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-kube-api-access-4ddxv\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694201 kubelet[2437]: I1101 00:24:05.694189 2437 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e0469e6-04e0-472c-8850-4d766fdef3e0-clustermesh-secrets\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694272 kubelet[2437]: I1101 00:24:05.694262 2437 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cni-path\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694333 kubelet[2437]: I1101 00:24:05.694324 2437 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-hostproc\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694396 kubelet[2437]: I1101 00:24:05.694379 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-config-path\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694455 kubelet[2437]: I1101 00:24:05.694445 2437 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694524 kubelet[2437]: I1101 00:24:05.694514 2437 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-bpf-maps\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694582 kubelet[2437]: I1101 00:24:05.694573 2437 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-xtables-lock\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694645 kubelet[2437]: I1101 00:24:05.694630 2437 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-host-proc-sys-net\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694728 kubelet[2437]: I1101 00:24:05.694717 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-run\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694803 kubelet[2437]: I1101 00:24:05.694793 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4f659f8-0a56-40fc-8345-105772c07b52-cilium-config-path\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694873 kubelet[2437]: I1101 00:24:05.694861 2437 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-lib-modules\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694932 kubelet[2437]: I1101 00:24:05.694923 2437 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-cilium-cgroup\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.694993 kubelet[2437]: I1101 00:24:05.694977 2437 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e0469e6-04e0-472c-8850-4d766fdef3e0-etc-cni-netd\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.695051 kubelet[2437]: I1101 00:24:05.695042 2437 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e0469e6-04e0-472c-8850-4d766fdef3e0-hubble-tls\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:05.695156 kubelet[2437]: I1101 00:24:05.695144 2437 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-llzrd\" (UniqueName: \"kubernetes.io/projected/e4f659f8-0a56-40fc-8345-105772c07b52-kube-api-access-llzrd\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:06.094242 kubelet[2437]: I1101 00:24:06.094199 2437 scope.go:117] "RemoveContainer" containerID="cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530" Nov 1 00:24:06.095237 systemd[1]: Removed slice kubepods-besteffort-pode4f659f8_0a56_40fc_8345_105772c07b52.slice. Nov 1 00:24:06.099741 env[1490]: time="2025-11-01T00:24:06.099675233Z" level=info msg="RemoveContainer for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\"" Nov 1 00:24:06.105059 systemd[1]: Removed slice kubepods-burstable-pod9e0469e6_04e0_472c_8850_4d766fdef3e0.slice. Nov 1 00:24:06.105141 systemd[1]: kubepods-burstable-pod9e0469e6_04e0_472c_8850_4d766fdef3e0.slice: Consumed 6.363s CPU time. 
Nov 1 00:24:06.113172 env[1490]: time="2025-11-01T00:24:06.113124634Z" level=info msg="RemoveContainer for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" returns successfully" Nov 1 00:24:06.113481 kubelet[2437]: I1101 00:24:06.113458 2437 scope.go:117] "RemoveContainer" containerID="cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530" Nov 1 00:24:06.113857 env[1490]: time="2025-11-01T00:24:06.113787283Z" level=error msg="ContainerStatus for \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\": not found" Nov 1 00:24:06.114031 kubelet[2437]: E1101 00:24:06.114011 2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\": not found" containerID="cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530" Nov 1 00:24:06.114138 kubelet[2437]: I1101 00:24:06.114102 2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530"} err="failed to get container status \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfe971be655784c1a4d46643e9eca7ea6c3dca58a7eb59277d9346db0b16c530\": not found" Nov 1 00:24:06.114208 kubelet[2437]: I1101 00:24:06.114198 2437 scope.go:117] "RemoveContainer" containerID="83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a" Nov 1 00:24:06.115196 env[1490]: time="2025-11-01T00:24:06.115166224Z" level=info msg="RemoveContainer for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\"" Nov 1 00:24:06.130934 env[1490]: time="2025-11-01T00:24:06.130820097Z" level=info msg="RemoveContainer for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" returns successfully" Nov 1 00:24:06.131308 kubelet[2437]: I1101 00:24:06.131270 2437 scope.go:117] "RemoveContainer" containerID="518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf" Nov 1 00:24:06.132440 env[1490]: time="2025-11-01T00:24:06.132409041Z" level=info msg="RemoveContainer for \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\"" Nov 1 00:24:06.143566 env[1490]: time="2025-11-01T00:24:06.143409524Z" level=info msg="RemoveContainer for \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\" returns successfully" Nov 1 00:24:06.144071 kubelet[2437]: I1101 00:24:06.144049 2437 scope.go:117] "RemoveContainer" containerID="1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614" Nov 1 00:24:06.147747 env[1490]: time="2025-11-01T00:24:06.147363143Z" level=info msg="RemoveContainer for \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\"" Nov 1 00:24:06.160439 env[1490]: time="2025-11-01T00:24:06.160388257Z" level=info msg="RemoveContainer for \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\" returns successfully" Nov 1 00:24:06.160882 kubelet[2437]: I1101 00:24:06.160854 2437 scope.go:117] "RemoveContainer" containerID="ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f" Nov 1 00:24:06.163298 env[1490]: time="2025-11-01T00:24:06.163255859Z" level=info msg="RemoveContainer for 
\"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\"" Nov 1 00:24:06.174966 env[1490]: time="2025-11-01T00:24:06.174900313Z" level=info msg="RemoveContainer for \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\" returns successfully" Nov 1 00:24:06.177482 kubelet[2437]: I1101 00:24:06.177455 2437 scope.go:117] "RemoveContainer" containerID="3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10" Nov 1 00:24:06.183000 env[1490]: time="2025-11-01T00:24:06.182961753Z" level=info msg="RemoveContainer for \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\"" Nov 1 00:24:06.195017 env[1490]: time="2025-11-01T00:24:06.194960851Z" level=info msg="RemoveContainer for \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\" returns successfully" Nov 1 00:24:06.195468 kubelet[2437]: I1101 00:24:06.195447 2437 scope.go:117] "RemoveContainer" containerID="83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a" Nov 1 00:24:06.195947 env[1490]: time="2025-11-01T00:24:06.195874065Z" level=error msg="ContainerStatus for \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\": not found" Nov 1 00:24:06.196192 kubelet[2437]: E1101 00:24:06.196159 2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\": not found" containerID="83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a" Nov 1 00:24:06.196307 kubelet[2437]: I1101 00:24:06.196285 2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a"} err="failed to get container status \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"83059ccd54413cf303ae3d834d068aad57abe015e467178421a499c0db9ccd5a\": not found" Nov 1 00:24:06.196400 kubelet[2437]: I1101 00:24:06.196389 2437 scope.go:117] "RemoveContainer" containerID="518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf" Nov 1 00:24:06.196752 env[1490]: time="2025-11-01T00:24:06.196706637Z" level=error msg="ContainerStatus for \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\": not found" Nov 1 00:24:06.198473 kubelet[2437]: E1101 00:24:06.198447 2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\": not found" containerID="518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf" Nov 1 00:24:06.198960 kubelet[2437]: I1101 00:24:06.198928 2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf"} err="failed to get container status \"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"518787dff5d87c389692381a4b3016344dc6c9cba39c178e2655059cb5749cdf\": not found" Nov 1 00:24:06.199029 kubelet[2437]: I1101 00:24:06.198963 2437 scope.go:117] "RemoveContainer" containerID="1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614" Nov 1 00:24:06.201908 env[1490]: time="2025-11-01T00:24:06.201825273Z" level=error msg="ContainerStatus for \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\": not found" Nov 1 00:24:06.203962 kubelet[2437]: E1101 00:24:06.203925 2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\": not found" containerID="1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614" Nov 1 00:24:06.204056 kubelet[2437]: I1101 00:24:06.203975 2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614"} err="failed to get container status \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f3915ce8a96b907e364c10a74bc945111400e92f48e3ef3acc546c81d9b0614\": not found" Nov 1 00:24:06.204056 kubelet[2437]: I1101 00:24:06.203996 2437 scope.go:117] "RemoveContainer" containerID="ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f" Nov 1 00:24:06.205458 env[1490]: time="2025-11-01T00:24:06.205380046Z" level=error msg="ContainerStatus for \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\": not found" Nov 1 00:24:06.205652 kubelet[2437]: E1101 00:24:06.205608 2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\": not found" containerID="ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f" Nov 1 00:24:06.205718 kubelet[2437]: I1101 00:24:06.205652 2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f"} err="failed to get container status \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba42b9af6295dab3b49b092853e8678b0ed6cdbb8326d673627c81238bd55b8f\": not found" Nov 1 00:24:06.205718 kubelet[2437]: I1101 00:24:06.205673 2437 scope.go:117] "RemoveContainer" containerID="3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10" Nov 1 00:24:06.206014 env[1490]: time="2025-11-01T00:24:06.205917534Z" level=error msg="ContainerStatus for \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\": not found" Nov 1 00:24:06.206118 kubelet[2437]: E1101 00:24:06.206090 2437 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\": not found" containerID="3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10" Nov 1 00:24:06.206169 kubelet[2437]: I1101 00:24:06.206116 2437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10"} err="failed to get container status \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c60ed3c8340d66561a35eea314c37f4180338bc9394236a91d1a75bec681b10\": not found" Nov 1 00:24:06.231329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827-rootfs.mount: Deactivated successfully. Nov 1 00:24:06.231415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be4f11fb042f2b455092deb5d414b483a0bbe0a028864106a7739d280ab10827-shm.mount: Deactivated successfully. Nov 1 00:24:06.231476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e-rootfs.mount: Deactivated successfully. Nov 1 00:24:06.231525 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-379ae9954dda6b1b58501c43b85e242de28736ac3aa59bf93ebd330d698dda4e-shm.mount: Deactivated successfully. Nov 1 00:24:06.231570 systemd[1]: var-lib-kubelet-pods-e4f659f8\x2d0a56\x2d40fc\x2d8345\x2d105772c07b52-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dllzrd.mount: Deactivated successfully. Nov 1 00:24:06.231621 systemd[1]: var-lib-kubelet-pods-9e0469e6\x2d04e0\x2d472c\x2d8850\x2d4d766fdef3e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ddxv.mount: Deactivated successfully. Nov 1 00:24:06.231669 systemd[1]: var-lib-kubelet-pods-9e0469e6\x2d04e0\x2d472c\x2d8850\x2d4d766fdef3e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:24:06.231737 systemd[1]: var-lib-kubelet-pods-9e0469e6\x2d04e0\x2d472c\x2d8850\x2d4d766fdef3e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:24:06.755503 kubelet[2437]: I1101 00:24:06.755431 2437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0469e6-04e0-472c-8850-4d766fdef3e0" path="/var/lib/kubelet/pods/9e0469e6-04e0-472c-8850-4d766fdef3e0/volumes" Nov 1 00:24:06.756498 kubelet[2437]: I1101 00:24:06.756473 2437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f659f8-0a56-40fc-8345-105772c07b52" path="/var/lib/kubelet/pods/e4f659f8-0a56-40fc-8345-105772c07b52/volumes" Nov 1 00:24:07.231269 sshd[3963]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:07.234232 systemd[1]: sshd@19-10.200.20.48:22-10.200.16.10:51442.service: Deactivated successfully. Nov 1 00:24:07.234992 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:24:07.235149 systemd[1]: session-22.scope: Consumed 1.174s CPU time. Nov 1 00:24:07.235518 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:24:07.236347 systemd-logind[1467]: Removed session 22. Nov 1 00:24:07.307403 systemd[1]: Started sshd@20-10.200.20.48:22-10.200.16.10:51456.service. 
Nov 1 00:24:07.738818 sshd[4132]: Accepted publickey for core from 10.200.16.10 port 51456 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:24:07.740138 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:07.744784 systemd[1]: Started session-23.scope. Nov 1 00:24:07.745109 systemd-logind[1467]: New session 23 of user core. Nov 1 00:24:07.866128 kubelet[2437]: E1101 00:24:07.866084 2437 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:24:08.939897 systemd[1]: Created slice kubepods-burstable-pod945a013b_d830_464c_bfe8_c4705221a958.slice. Nov 1 00:24:08.968322 sshd[4132]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:08.971657 systemd[1]: sshd@20-10.200.20.48:22-10.200.16.10:51456.service: Deactivated successfully. Nov 1 00:24:08.972385 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:24:08.972836 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:24:08.973645 systemd-logind[1467]: Removed session 23. Nov 1 00:24:09.016170 kubelet[2437]: I1101 00:24:09.016121 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-clustermesh-secrets\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016170 kubelet[2437]: I1101 00:24:09.016166 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/945a013b-d830-464c-bfe8-c4705221a958-cilium-config-path\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016540 kubelet[2437]: I1101 00:24:09.016183 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-etc-cni-netd\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016540 kubelet[2437]: I1101 00:24:09.016200 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-cilium-ipsec-secrets\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016540 kubelet[2437]: I1101 00:24:09.016216 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-bpf-maps\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016540 kubelet[2437]: I1101 00:24:09.016233 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-cgroup\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016540 kubelet[2437]: I1101 00:24:09.016251 2437 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-xtables-lock\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016540 kubelet[2437]: I1101 00:24:09.016268 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mql4c\" (UniqueName: \"kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-kube-api-access-mql4c\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016723 kubelet[2437]: I1101 00:24:09.016283 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-hostproc\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016723 kubelet[2437]: I1101 00:24:09.016299 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-run\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016723 kubelet[2437]: I1101 00:24:09.016314 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-net\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016723 kubelet[2437]: I1101 00:24:09.016329 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-hubble-tls\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016723 kubelet[2437]: I1101 00:24:09.016344 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cni-path\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016723 kubelet[2437]: I1101 00:24:09.016358 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-kernel\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.016869 kubelet[2437]: I1101 00:24:09.016374 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-lib-modules\") pod \"cilium-4dv9d\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " pod="kube-system/cilium-4dv9d" Nov 1 00:24:09.048845 systemd[1]: Started sshd@21-10.200.20.48:22-10.200.16.10:51462.service. 
Nov 1 00:24:09.249760 env[1490]: time="2025-11-01T00:24:09.249299698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dv9d,Uid:945a013b-d830-464c-bfe8-c4705221a958,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:09.282661 env[1490]: time="2025-11-01T00:24:09.282465474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:09.282661 env[1490]: time="2025-11-01T00:24:09.282509194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:09.282661 env[1490]: time="2025-11-01T00:24:09.282520714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:09.282904 env[1490]: time="2025-11-01T00:24:09.282681837Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37 pid=4157 runtime=io.containerd.runc.v2 Nov 1 00:24:09.296922 systemd[1]: Started cri-containerd-54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37.scope. Nov 1 00:24:09.324763 env[1490]: time="2025-11-01T00:24:09.324707414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dv9d,Uid:945a013b-d830-464c-bfe8-c4705221a958,Namespace:kube-system,Attempt:0,} returns sandbox id \"54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37\"" Nov 1 00:24:09.334887 env[1490]: time="2025-11-01T00:24:09.334838953Z" level=info msg="CreateContainer within sandbox \"54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:24:09.364783 env[1490]: time="2025-11-01T00:24:09.364719683Z" level=info msg="CreateContainer within sandbox \"54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\"" Nov 1 00:24:09.366082 env[1490]: time="2025-11-01T00:24:09.365359212Z" level=info msg="StartContainer for \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\"" Nov 1 00:24:09.381224 systemd[1]: Started cri-containerd-23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85.scope. Nov 1 00:24:09.393888 systemd[1]: cri-containerd-23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85.scope: Deactivated successfully. 
Nov 1 00:24:09.429427 env[1490]: time="2025-11-01T00:24:09.429375012Z" level=info msg="shim disconnected" id=23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85 Nov 1 00:24:09.429735 env[1490]: time="2025-11-01T00:24:09.429685296Z" level=warning msg="cleaning up after shim disconnected" id=23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85 namespace=k8s.io Nov 1 00:24:09.429861 env[1490]: time="2025-11-01T00:24:09.429846418Z" level=info msg="cleaning up dead shim" Nov 1 00:24:09.437204 env[1490]: time="2025-11-01T00:24:09.437160958Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4214 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:24:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:24:09.437659 env[1490]: time="2025-11-01T00:24:09.437563164Z" level=error msg="copy shim log" error="read /proc/self/fd/55: file already closed" Nov 1 00:24:09.438788 env[1490]: time="2025-11-01T00:24:09.438751100Z" level=error msg="Failed to pipe stderr of container \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\"" error="reading from a closed fifo" Nov 1 00:24:09.438952 env[1490]: time="2025-11-01T00:24:09.438926143Z" level=error msg="Failed to pipe stdout of container \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\"" error="reading from a closed fifo" Nov 1 00:24:09.445744 env[1490]: time="2025-11-01T00:24:09.445653595Z" level=error msg="StartContainer for \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:24:09.445986 kubelet[2437]: E1101 00:24:09.445946 2437 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85" Nov 1 00:24:09.446077 kubelet[2437]: E1101 00:24:09.446041 2437 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-4dv9d_kube-system(945a013b-d830-464c-bfe8-c4705221a958): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError" Nov 1 00:24:09.446117 kubelet[2437]: E1101 00:24:09.446078 2437 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4dv9d" podUID="945a013b-d830-464c-bfe8-c4705221a958" Nov 1 00:24:09.511938 
sshd[4142]: Accepted publickey for core from 10.200.16.10 port 51462 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:24:09.512532 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:09.516704 systemd[1]: Started session-24.scope. Nov 1 00:24:09.517636 systemd-logind[1467]: New session 24 of user core. Nov 1 00:24:09.927786 sshd[4142]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:09.931105 systemd[1]: sshd@21-10.200.20.48:22-10.200.16.10:51462.service: Deactivated successfully. Nov 1 00:24:09.931433 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:24:09.931833 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:24:09.932633 systemd-logind[1467]: Removed session 24. Nov 1 00:24:10.005534 systemd[1]: Started sshd@22-10.200.20.48:22-10.200.16.10:45332.service. Nov 1 00:24:10.108286 env[1490]: time="2025-11-01T00:24:10.108248296Z" level=info msg="StopPodSandbox for \"54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37\"" Nov 1 00:24:10.108501 env[1490]: time="2025-11-01T00:24:10.108479500Z" level=info msg="Container to stop \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:10.116532 systemd[1]: cri-containerd-54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37.scope: Deactivated successfully. Nov 1 00:24:10.130103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37-shm.mount: Deactivated successfully. Nov 1 00:24:10.145764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37-rootfs.mount: Deactivated successfully. 
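[Editor's note] The StartContainer failure above ("write /proc/self/attr/keycreate: invalid argument") is runc reporting that, during container init, it could not write the requested SELinux keyring label; the kernel on this node rejects the value with EINVAL, so the mount-cgroup init container never starts and kubelet subsequently stops the sandbox. A minimal sketch that attempts the same write outside of runc is below (assumption: run as root on the node; the label string is a placeholder, and on kernels without an SELinux LSM the open itself fails instead).

package main

import (
	"fmt"
	"os"
)

func main() {
	// Placeholder label; in the real failure the value comes from the pod's
	// seLinuxOptions / CRI runtime configuration, not from this sketch.
	label := []byte("system_u:system_r:container_t:s0")

	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err) // e.g. file absent when no SELinux LSM is active
		return
	}
	defer f.Close()

	if _, err := f.Write(label); err != nil {
		fmt.Println("write:", err) // "invalid argument" matches the log above
		return
	}
	fmt.Println("label accepted")
}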
Nov 1 00:24:10.162464 env[1490]: time="2025-11-01T00:24:10.162403060Z" level=info msg="shim disconnected" id=54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37 Nov 1 00:24:10.162464 env[1490]: time="2025-11-01T00:24:10.162459501Z" level=warning msg="cleaning up after shim disconnected" id=54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37 namespace=k8s.io Nov 1 00:24:10.162464 env[1490]: time="2025-11-01T00:24:10.162471341Z" level=info msg="cleaning up dead shim" Nov 1 00:24:10.175246 env[1490]: time="2025-11-01T00:24:10.175195231Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4256 runtime=io.containerd.runc.v2\n" Nov 1 00:24:10.175525 env[1490]: time="2025-11-01T00:24:10.175495635Z" level=info msg="TearDown network for sandbox \"54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37\" successfully" Nov 1 00:24:10.175574 env[1490]: time="2025-11-01T00:24:10.175523116Z" level=info msg="StopPodSandbox for \"54719cd2ff659a5f700cc3a3f48cc849e3a676a7a5d39a8b76d4ddaf09d06b37\" returns successfully" Nov 1 00:24:10.323918 kubelet[2437]: I1101 00:24:10.323864 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-hostproc\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324302 kubelet[2437]: I1101 00:24:10.323921 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-run\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324302 kubelet[2437]: I1101 00:24:10.323959 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-lib-modules\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324302 kubelet[2437]: I1101 00:24:10.323986 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-xtables-lock\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324302 kubelet[2437]: I1101 00:24:10.324014 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-cgroup\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324302 kubelet[2437]: I1101 00:24:10.324038 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-net\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324302 kubelet[2437]: I1101 00:24:10.324058 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-clustermesh-secrets\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: 
\"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324449 kubelet[2437]: I1101 00:24:10.324072 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-bpf-maps\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324449 kubelet[2437]: I1101 00:24:10.324088 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cni-path\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324449 kubelet[2437]: I1101 00:24:10.324101 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-kernel\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324449 kubelet[2437]: I1101 00:24:10.324118 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-cilium-ipsec-secrets\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324449 kubelet[2437]: I1101 00:24:10.324136 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/945a013b-d830-464c-bfe8-c4705221a958-cilium-config-path\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324449 kubelet[2437]: I1101 00:24:10.324152 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-hubble-tls\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324609 kubelet[2437]: I1101 00:24:10.324165 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-etc-cni-netd\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324609 kubelet[2437]: I1101 00:24:10.324184 2437 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mql4c\" (UniqueName: \"kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-kube-api-access-mql4c\") pod \"945a013b-d830-464c-bfe8-c4705221a958\" (UID: \"945a013b-d830-464c-bfe8-c4705221a958\") " Nov 1 00:24:10.324791 kubelet[2437]: I1101 00:24:10.324743 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.324895 kubelet[2437]: I1101 00:24:10.324883 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cni-path" (OuterVolumeSpecName: "cni-path") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.324984 kubelet[2437]: I1101 00:24:10.324969 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.328161 systemd[1]: var-lib-kubelet-pods-945a013b\x2dd830\x2d464c\x2dbfe8\x2dc4705221a958-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmql4c.mount: Deactivated successfully. Nov 1 00:24:10.330731 kubelet[2437]: I1101 00:24:10.330671 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.331290 kubelet[2437]: I1101 00:24:10.331265 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/945a013b-d830-464c-bfe8-c4705221a958-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:24:10.331363 kubelet[2437]: I1101 00:24:10.331347 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-kube-api-access-mql4c" (OuterVolumeSpecName: "kube-api-access-mql4c") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "kube-api-access-mql4c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:10.331407 kubelet[2437]: I1101 00:24:10.331370 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.331407 kubelet[2437]: I1101 00:24:10.331386 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-hostproc" (OuterVolumeSpecName: "hostproc") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.331407 kubelet[2437]: I1101 00:24:10.331401 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.331572 kubelet[2437]: I1101 00:24:10.331415 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.331572 kubelet[2437]: I1101 00:24:10.331429 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.331572 kubelet[2437]: I1101 00:24:10.331443 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:10.333480 systemd[1]: var-lib-kubelet-pods-945a013b\x2dd830\x2d464c\x2dbfe8\x2dc4705221a958-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:24:10.334832 kubelet[2437]: I1101 00:24:10.334075 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:10.338907 kubelet[2437]: I1101 00:24:10.336957 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:24:10.338015 systemd[1]: var-lib-kubelet-pods-945a013b\x2dd830\x2d464c\x2dbfe8\x2dc4705221a958-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:24:10.338115 systemd[1]: var-lib-kubelet-pods-945a013b\x2dd830\x2d464c\x2dbfe8\x2dc4705221a958-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:24:10.339301 kubelet[2437]: I1101 00:24:10.339272 2437 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "945a013b-d830-464c-bfe8-c4705221a958" (UID: "945a013b-d830-464c-bfe8-c4705221a958"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:24:10.424493 kubelet[2437]: I1101 00:24:10.424459 2437 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-lib-modules\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.424671 kubelet[2437]: I1101 00:24:10.424659 2437 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-xtables-lock\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.424761 kubelet[2437]: I1101 00:24:10.424751 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-cgroup\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.424832 kubelet[2437]: I1101 00:24:10.424821 2437 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-net\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.424898 kubelet[2437]: I1101 00:24:10.424888 2437 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-clustermesh-secrets\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.424962 kubelet[2437]: I1101 00:24:10.424944 2437 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-bpf-maps\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425024 kubelet[2437]: I1101 00:24:10.425015 2437 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cni-path\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425086 kubelet[2437]: I1101 00:24:10.425069 2437 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425151 kubelet[2437]: I1101 00:24:10.425134 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/945a013b-d830-464c-bfe8-c4705221a958-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425219 kubelet[2437]: I1101 00:24:10.425209 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/945a013b-d830-464c-bfe8-c4705221a958-cilium-config-path\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425278 kubelet[2437]: I1101 00:24:10.425269 2437 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-hubble-tls\") on node 
\"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425344 kubelet[2437]: I1101 00:24:10.425327 2437 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-etc-cni-netd\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425402 kubelet[2437]: I1101 00:24:10.425392 2437 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mql4c\" (UniqueName: \"kubernetes.io/projected/945a013b-d830-464c-bfe8-c4705221a958-kube-api-access-mql4c\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425465 kubelet[2437]: I1101 00:24:10.425447 2437 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-hostproc\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.425522 kubelet[2437]: I1101 00:24:10.425513 2437 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/945a013b-d830-464c-bfe8-c4705221a958-cilium-run\") on node \"ci-3510.3.8-n-ec0975c3e1\" DevicePath \"\"" Nov 1 00:24:10.465432 sshd[4236]: Accepted publickey for core from 10.200.16.10 port 45332 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:24:10.466785 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:10.471307 systemd[1]: Started session-25.scope. Nov 1 00:24:10.471612 systemd-logind[1467]: New session 25 of user core. Nov 1 00:24:10.758799 systemd[1]: Removed slice kubepods-burstable-pod945a013b_d830_464c_bfe8_c4705221a958.slice. Nov 1 00:24:11.111416 kubelet[2437]: I1101 00:24:11.111328 2437 scope.go:117] "RemoveContainer" containerID="23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85" Nov 1 00:24:11.115179 env[1490]: time="2025-11-01T00:24:11.114863991Z" level=info msg="RemoveContainer for \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\"" Nov 1 00:24:11.126215 env[1490]: time="2025-11-01T00:24:11.126094777Z" level=info msg="RemoveContainer for \"23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85\" returns successfully" Nov 1 00:24:11.209783 systemd[1]: Created slice kubepods-burstable-pod1e2c1c22_9374_4732_88c7_2d08e17bb065.slice. 
Nov 1 00:24:11.331702 kubelet[2437]: I1101 00:24:11.331651 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-host-proc-sys-net\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332023 kubelet[2437]: I1101 00:24:11.331716 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c1c22-9374-4732-88c7-2d08e17bb065-clustermesh-secrets\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332023 kubelet[2437]: I1101 00:24:11.331737 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-cilium-cgroup\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332023 kubelet[2437]: I1101 00:24:11.331753 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-bpf-maps\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332023 kubelet[2437]: I1101 00:24:11.331776 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-etc-cni-netd\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332023 kubelet[2437]: I1101 00:24:11.331795 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mx9\" (UniqueName: \"kubernetes.io/projected/1e2c1c22-9374-4732-88c7-2d08e17bb065-kube-api-access-d9mx9\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332023 kubelet[2437]: I1101 00:24:11.331811 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-hostproc\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332171 kubelet[2437]: I1101 00:24:11.331826 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e2c1c22-9374-4732-88c7-2d08e17bb065-cilium-ipsec-secrets\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332171 kubelet[2437]: I1101 00:24:11.331841 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-cni-path\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332171 kubelet[2437]: I1101 00:24:11.331871 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-host-proc-sys-kernel\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332171 kubelet[2437]: I1101 00:24:11.331886 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-lib-modules\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332171 kubelet[2437]: I1101 00:24:11.331899 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e2c1c22-9374-4732-88c7-2d08e17bb065-hubble-tls\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332171 kubelet[2437]: I1101 00:24:11.331916 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-cilium-run\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332301 kubelet[2437]: I1101 00:24:11.331942 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e2c1c22-9374-4732-88c7-2d08e17bb065-xtables-lock\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.332301 kubelet[2437]: I1101 00:24:11.331957 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e2c1c22-9374-4732-88c7-2d08e17bb065-cilium-config-path\") pod \"cilium-l2bmc\" (UID: \"1e2c1c22-9374-4732-88c7-2d08e17bb065\") " pod="kube-system/cilium-l2bmc" Nov 1 00:24:11.519343 env[1490]: time="2025-11-01T00:24:11.519256649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2bmc,Uid:1e2c1c22-9374-4732-88c7-2d08e17bb065,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:11.549992 env[1490]: time="2025-11-01T00:24:11.549912687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:11.550192 env[1490]: time="2025-11-01T00:24:11.550168731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:11.550290 env[1490]: time="2025-11-01T00:24:11.550270052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:11.550542 env[1490]: time="2025-11-01T00:24:11.550514335Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606 pid=4292 runtime=io.containerd.runc.v2 Nov 1 00:24:11.560260 systemd[1]: Started cri-containerd-c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606.scope. 
Nov 1 00:24:11.583286 env[1490]: time="2025-11-01T00:24:11.583241801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2bmc,Uid:1e2c1c22-9374-4732-88c7-2d08e17bb065,Namespace:kube-system,Attempt:0,} returns sandbox id \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\"" Nov 1 00:24:11.593421 env[1490]: time="2025-11-01T00:24:11.593373252Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:24:11.629071 env[1490]: time="2025-11-01T00:24:11.629024036Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082\"" Nov 1 00:24:11.631609 env[1490]: time="2025-11-01T00:24:11.631545349Z" level=info msg="StartContainer for \"eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082\"" Nov 1 00:24:11.646114 systemd[1]: Started cri-containerd-eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082.scope. Nov 1 00:24:11.675634 env[1490]: time="2025-11-01T00:24:11.675582921Z" level=info msg="StartContainer for \"eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082\" returns successfully" Nov 1 00:24:11.682659 systemd[1]: cri-containerd-eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082.scope: Deactivated successfully. Nov 1 00:24:11.727599 env[1490]: time="2025-11-01T00:24:11.727552397Z" level=info msg="shim disconnected" id=eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082 Nov 1 00:24:11.727874 env[1490]: time="2025-11-01T00:24:11.727855081Z" level=warning msg="cleaning up after shim disconnected" id=eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082 namespace=k8s.io Nov 1 00:24:11.727939 env[1490]: time="2025-11-01T00:24:11.727926642Z" level=info msg="cleaning up dead shim" Nov 1 00:24:11.734950 env[1490]: time="2025-11-01T00:24:11.734907253Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4375 runtime=io.containerd.runc.v2\n" Nov 1 00:24:12.123683 env[1490]: time="2025-11-01T00:24:12.123635064Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:24:12.162626 env[1490]: time="2025-11-01T00:24:12.162573397Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df\"" Nov 1 00:24:12.163311 env[1490]: time="2025-11-01T00:24:12.163284886Z" level=info msg="StartContainer for \"a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df\"" Nov 1 00:24:12.181709 systemd[1]: Started cri-containerd-a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df.scope. Nov 1 00:24:12.219349 env[1490]: time="2025-11-01T00:24:12.219300834Z" level=info msg="StartContainer for \"a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df\" returns successfully" Nov 1 00:24:12.223964 systemd[1]: cri-containerd-a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df.scope: Deactivated successfully. 
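[Editor's note] Unlike the earlier attempt, mount-cgroup in the new cilium-l2bmc pod starts, returns successfully and exits, so the following "scope: Deactivated" and "shim disconnected ... cleaning up dead shim" lines are the normal teardown of a short-lived init container rather than an error. The containerd entries here are key=value formatted (time=..., level=..., msg="...", id=...); a small stdlib-only parser sketch for that shape is below, with the pattern inferred from these lines.

package main

import (
	"fmt"
	"regexp"
)

// Illustrative parser for containerd-style key=value log fields. Quoted
// values may contain escaped quotes; unquoted values run to the next space.
var kvRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func main() {
	line := `time="2025-11-01T00:24:11.727552397Z" level=info msg="shim disconnected" id=eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082`
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%-6s %s\n", m[1], m[2])
	}
}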
Nov 1 00:24:12.252069 env[1490]: time="2025-11-01T00:24:12.252013768Z" level=info msg="shim disconnected" id=a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df Nov 1 00:24:12.252069 env[1490]: time="2025-11-01T00:24:12.252062288Z" level=warning msg="cleaning up after shim disconnected" id=a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df namespace=k8s.io Nov 1 00:24:12.252069 env[1490]: time="2025-11-01T00:24:12.252071768Z" level=info msg="cleaning up dead shim" Nov 1 00:24:12.258372 env[1490]: time="2025-11-01T00:24:12.258325447Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4436 runtime=io.containerd.runc.v2\n" Nov 1 00:24:12.537243 kubelet[2437]: W1101 00:24:12.537184 2437 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod945a013b_d830_464c_bfe8_c4705221a958.slice/cri-containerd-23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85.scope WatchSource:0}: container "23947bb3cff90e6134165711aa995dbc742e17f80e9adcfa49074944b3a54a85" in namespace "k8s.io": not found Nov 1 00:24:12.755660 kubelet[2437]: I1101 00:24:12.755629 2437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945a013b-d830-464c-bfe8-c4705221a958" path="/var/lib/kubelet/pods/945a013b-d830-464c-bfe8-c4705221a958/volumes" Nov 1 00:24:12.867624 kubelet[2437]: E1101 00:24:12.867264 2437 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:24:13.130264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df-rootfs.mount: Deactivated successfully. Nov 1 00:24:13.133545 env[1490]: time="2025-11-01T00:24:13.133393267Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:24:13.181534 env[1490]: time="2025-11-01T00:24:13.181461338Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c\"" Nov 1 00:24:13.183718 env[1490]: time="2025-11-01T00:24:13.182124786Z" level=info msg="StartContainer for \"01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c\"" Nov 1 00:24:13.207455 systemd[1]: Started cri-containerd-01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c.scope. Nov 1 00:24:13.239969 systemd[1]: cri-containerd-01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c.scope: Deactivated successfully. 
Nov 1 00:24:13.241251 env[1490]: time="2025-11-01T00:24:13.241046150Z" level=info msg="StartContainer for \"01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c\" returns successfully" Nov 1 00:24:13.282395 env[1490]: time="2025-11-01T00:24:13.282350458Z" level=info msg="shim disconnected" id=01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c Nov 1 00:24:13.282646 env[1490]: time="2025-11-01T00:24:13.282627661Z" level=warning msg="cleaning up after shim disconnected" id=01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c namespace=k8s.io Nov 1 00:24:13.282731 env[1490]: time="2025-11-01T00:24:13.282717102Z" level=info msg="cleaning up dead shim" Nov 1 00:24:13.290421 env[1490]: time="2025-11-01T00:24:13.290378236Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4496 runtime=io.containerd.runc.v2\n" Nov 1 00:24:14.130341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c-rootfs.mount: Deactivated successfully. Nov 1 00:24:14.138830 env[1490]: time="2025-11-01T00:24:14.138786177Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:24:14.174954 env[1490]: time="2025-11-01T00:24:14.174906608Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b\"" Nov 1 00:24:14.175972 env[1490]: time="2025-11-01T00:24:14.175918020Z" level=info msg="StartContainer for \"db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b\"" Nov 1 00:24:14.198726 systemd[1]: Started cri-containerd-db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b.scope. Nov 1 00:24:14.230005 systemd[1]: cri-containerd-db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b.scope: Deactivated successfully. Nov 1 00:24:14.237915 env[1490]: time="2025-11-01T00:24:14.237867840Z" level=info msg="StartContainer for \"db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b\" returns successfully" Nov 1 00:24:14.273790 env[1490]: time="2025-11-01T00:24:14.273736029Z" level=info msg="shim disconnected" id=db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b Nov 1 00:24:14.273790 env[1490]: time="2025-11-01T00:24:14.273782589Z" level=warning msg="cleaning up after shim disconnected" id=db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b namespace=k8s.io Nov 1 00:24:14.273790 env[1490]: time="2025-11-01T00:24:14.273791429Z" level=info msg="cleaning up dead shim" Nov 1 00:24:14.280624 env[1490]: time="2025-11-01T00:24:14.280576870Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4554 runtime=io.containerd.runc.v2\n" Nov 1 00:24:15.130924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b-rootfs.mount: Deactivated successfully. 
Nov 1 00:24:15.142127 env[1490]: time="2025-11-01T00:24:15.142064672Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:24:15.188017 env[1490]: time="2025-11-01T00:24:15.187963765Z" level=info msg="CreateContainer within sandbox \"c630f2f19a8431cb848002ce947b7703f11d6f5b42a93ba906969bde408dc606\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc\"" Nov 1 00:24:15.188756 env[1490]: time="2025-11-01T00:24:15.188719534Z" level=info msg="StartContainer for \"3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc\"" Nov 1 00:24:15.210593 systemd[1]: Started cri-containerd-3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc.scope. Nov 1 00:24:15.246765 env[1490]: time="2025-11-01T00:24:15.246647286Z" level=info msg="StartContainer for \"3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc\" returns successfully" Nov 1 00:24:15.651179 kubelet[2437]: W1101 00:24:15.649591 2437 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e2c1c22_9374_4732_88c7_2d08e17bb065.slice/cri-containerd-eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082.scope WatchSource:0}: task eebc30c07654024c64350d5351f1afc80b938767d86e7b199a793bea74bff082 not found Nov 1 00:24:15.671861 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Nov 1 00:24:16.810456 kubelet[2437]: I1101 00:24:16.810394 2437 setters.go:543] "Node became not ready" node="ci-3510.3.8-n-ec0975c3e1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:24:16Z","lastTransitionTime":"2025-11-01T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:24:16.962988 systemd[1]: run-containerd-runc-k8s.io-3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc-runc.NIv39q.mount: Deactivated successfully. Nov 1 00:24:18.367900 systemd-networkd[1648]: lxc_health: Link UP Nov 1 00:24:18.377389 systemd-networkd[1648]: lxc_health: Gained carrier Nov 1 00:24:18.377834 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:24:18.759778 kubelet[2437]: W1101 00:24:18.759718 2437 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e2c1c22_9374_4732_88c7_2d08e17bb065.slice/cri-containerd-a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df.scope WatchSource:0}: task a1b13008163e841ded2b33a934d22ea5950e825c17efdda5ed767d4b3e34f1df not found Nov 1 00:24:19.138172 systemd[1]: run-containerd-runc-k8s.io-3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc-runc.XRKszX.mount: Deactivated successfully. 
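[Editor's note] The lxc_health veth that systemd-networkd reports as gaining carrier above (and an IPv6 link-local address just below) is the interface Cilium uses for endpoint health checks, so its appearance is a good sign the agent container is running. A quick stdlib-only way to confirm an interface's state on the node is to read its operstate and list its addresses, as sketched below; only the interface name is taken from the log, the rest is illustrative.

package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

func main() {
	name := "lxc_health" // interface name as reported by systemd-networkd above

	// Kernel-reported operational state, e.g. "up" once the link has carrier.
	state, err := os.ReadFile("/sys/class/net/" + name + "/operstate")
	if err != nil {
		fmt.Println("operstate:", err)
	} else {
		fmt.Println("operstate:", strings.TrimSpace(string(state)))
	}

	// Addresses, including the IPv6 link-local one logged as "Gained IPv6LL".
	if ifi, err := net.InterfaceByName(name); err == nil {
		addrs, _ := ifi.Addrs()
		for _, a := range addrs {
			fmt.Println("addr:", a)
		}
	} else {
		fmt.Println("interface:", err)
	}
}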
Nov 1 00:24:19.542947 kubelet[2437]: I1101 00:24:19.542879 2437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l2bmc" podStartSLOduration=8.542861548 podStartE2EDuration="8.542861548s" podCreationTimestamp="2025-11-01 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:24:16.15906126 +0000 UTC m=+153.534580891" watchObservedRunningTime="2025-11-01 00:24:19.542861548 +0000 UTC m=+156.918381219" Nov 1 00:24:19.665847 systemd-networkd[1648]: lxc_health: Gained IPv6LL Nov 1 00:24:21.333149 systemd[1]: run-containerd-runc-k8s.io-3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc-runc.104DZI.mount: Deactivated successfully. Nov 1 00:24:21.867353 kubelet[2437]: W1101 00:24:21.867308 2437 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e2c1c22_9374_4732_88c7_2d08e17bb065.slice/cri-containerd-01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c.scope WatchSource:0}: task 01e3052105de751725443f6c1a4a930d2c7b59de7755a5b004edfcf41229fd0c not found Nov 1 00:24:23.470149 systemd[1]: run-containerd-runc-k8s.io-3d373f5425ccc3690f12a0156577bce93291329f6abb1708f5b5b27018831fdc-runc.F5hD1T.mount: Deactivated successfully. Nov 1 00:24:23.610219 sshd[4236]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:23.613358 systemd[1]: sshd@22-10.200.20.48:22-10.200.16.10:45332.service: Deactivated successfully. Nov 1 00:24:23.614116 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:24:23.614738 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:24:23.615882 systemd-logind[1467]: Removed session 25. Nov 1 00:24:24.976091 kubelet[2437]: W1101 00:24:24.976040 2437 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e2c1c22_9374_4732_88c7_2d08e17bb065.slice/cri-containerd-db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b.scope WatchSource:0}: task db647c818563065b4dbeb15cf0485e24cfe9dd2c1c40cb9fd709792749fa769b not found
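[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=8.542861548 for cilium-l2bmc, which is the observed running time (2025-11-01 00:24:19.542861548 UTC) minus the pod creation timestamp (2025-11-01 00:24:11 UTC); the image-pull timestamps stay at the zero value because no pull was needed. A small sketch of that arithmetic on the timestamps from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches how the timestamps are printed in the kubelet entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-11-01 00:24:11 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-01 00:24:19.542861548 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 8.542861548, matching podStartSLOduration in the log above.
	fmt.Println(running.Sub(created).Seconds())
}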