Sep 6 01:22:35.032783 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 6 01:22:35.032802 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025 Sep 6 01:22:35.032810 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 6 01:22:35.032817 kernel: printk: bootconsole [pl11] enabled Sep 6 01:22:35.032822 kernel: efi: EFI v2.70 by EDK II Sep 6 01:22:35.032827 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98 Sep 6 01:22:35.032833 kernel: random: crng init done Sep 6 01:22:35.032839 kernel: ACPI: Early table checksum verification disabled Sep 6 01:22:35.032845 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 6 01:22:35.032850 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032856 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032861 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 6 01:22:35.032868 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032873 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032880 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032886 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032892 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032899 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032904 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 6 01:22:35.032910 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 6 01:22:35.032916 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 6 01:22:35.032922 kernel: NUMA: Failed to initialise from firmware Sep 6 01:22:35.032927 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Sep 6 01:22:35.032933 kernel: NUMA: NODE_DATA [mem 0x1bf7f4900-0x1bf7f9fff] Sep 6 01:22:35.032939 kernel: Zone ranges: Sep 6 01:22:35.032944 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 6 01:22:35.032950 kernel: DMA32 empty Sep 6 01:22:35.032955 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 6 01:22:35.032962 kernel: Movable zone start for each node Sep 6 01:22:35.032968 kernel: Early memory node ranges Sep 6 01:22:35.032973 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 6 01:22:35.032980 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Sep 6 01:22:35.032986 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 6 01:22:35.032992 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 6 01:22:35.032998 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 6 01:22:35.033003 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 6 01:22:35.033009 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 6 01:22:35.033015 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 6 01:22:35.033020 kernel: On node 0, zone DMA: 36 
pages in unavailable ranges Sep 6 01:22:35.033026 kernel: psci: probing for conduit method from ACPI. Sep 6 01:22:35.033035 kernel: psci: PSCIv1.1 detected in firmware. Sep 6 01:22:35.033041 kernel: psci: Using standard PSCI v0.2 function IDs Sep 6 01:22:35.033047 kernel: psci: MIGRATE_INFO_TYPE not supported. Sep 6 01:22:35.033053 kernel: psci: SMC Calling Convention v1.4 Sep 6 01:22:35.033068 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Sep 6 01:22:35.033075 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Sep 6 01:22:35.033081 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 6 01:22:35.033088 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 6 01:22:35.033094 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 6 01:22:35.033100 kernel: Detected PIPT I-cache on CPU0 Sep 6 01:22:35.033106 kernel: CPU features: detected: GIC system register CPU interface Sep 6 01:22:35.033112 kernel: CPU features: detected: Hardware dirty bit management Sep 6 01:22:35.033118 kernel: CPU features: detected: Spectre-BHB Sep 6 01:22:35.033124 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 6 01:22:35.033130 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 6 01:22:35.033136 kernel: CPU features: detected: ARM erratum 1418040 Sep 6 01:22:35.033143 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 6 01:22:35.033149 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 6 01:22:35.033156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 6 01:22:35.033161 kernel: Policy zone: Normal Sep 6 01:22:35.033169 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 01:22:35.033175 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 01:22:35.033182 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 01:22:35.033188 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 01:22:35.033194 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 01:22:35.033200 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Sep 6 01:22:35.033207 kernel: Memory: 3986884K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207276K reserved, 0K cma-reserved) Sep 6 01:22:35.033214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 01:22:35.033220 kernel: trace event string verifier disabled Sep 6 01:22:35.033226 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 6 01:22:35.033233 kernel: rcu: RCU event tracing is enabled. Sep 6 01:22:35.033239 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 01:22:35.033245 kernel: Trampoline variant of Tasks RCU enabled. Sep 6 01:22:35.033252 kernel: Tracing variant of Tasks RCU enabled. Sep 6 01:22:35.033258 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 01:22:35.033264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 01:22:35.033270 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 6 01:22:35.033276 kernel: GICv3: 960 SPIs implemented Sep 6 01:22:35.033283 kernel: GICv3: 0 Extended SPIs implemented Sep 6 01:22:35.033289 kernel: GICv3: Distributor has no Range Selector support Sep 6 01:22:35.033295 kernel: Root IRQ handler: gic_handle_irq Sep 6 01:22:35.033301 kernel: GICv3: 16 PPIs implemented Sep 6 01:22:35.033307 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 6 01:22:35.033313 kernel: ITS: No ITS available, not enabling LPIs Sep 6 01:22:35.033319 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 01:22:35.033325 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 6 01:22:35.037696 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 6 01:22:35.037716 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 6 01:22:35.037723 kernel: Console: colour dummy device 80x25 Sep 6 01:22:35.037737 kernel: printk: console [tty1] enabled Sep 6 01:22:35.037743 kernel: ACPI: Core revision 20210730 Sep 6 01:22:35.037750 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 6 01:22:35.037757 kernel: pid_max: default: 32768 minimum: 301 Sep 6 01:22:35.037763 kernel: LSM: Security Framework initializing Sep 6 01:22:35.037769 kernel: SELinux: Initializing. Sep 6 01:22:35.037776 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 01:22:35.037783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 01:22:35.037789 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 6 01:22:35.037797 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 6 01:22:35.037804 kernel: rcu: Hierarchical SRCU implementation. Sep 6 01:22:35.037810 kernel: Remapping and enabling EFI services. Sep 6 01:22:35.037816 kernel: smp: Bringing up secondary CPUs ... Sep 6 01:22:35.037823 kernel: Detected PIPT I-cache on CPU1 Sep 6 01:22:35.037830 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 6 01:22:35.037837 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 01:22:35.037843 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 6 01:22:35.037850 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 01:22:35.037856 kernel: SMP: Total of 2 processors activated. 
Sep 6 01:22:35.037865 kernel: CPU features: detected: 32-bit EL0 Support Sep 6 01:22:35.037871 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 6 01:22:35.037879 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 6 01:22:35.037886 kernel: CPU features: detected: CRC32 instructions Sep 6 01:22:35.037893 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 6 01:22:35.037900 kernel: CPU features: detected: LSE atomic instructions Sep 6 01:22:35.037907 kernel: CPU features: detected: Privileged Access Never Sep 6 01:22:35.037913 kernel: CPU: All CPU(s) started at EL1 Sep 6 01:22:35.037920 kernel: alternatives: patching kernel code Sep 6 01:22:35.037929 kernel: devtmpfs: initialized Sep 6 01:22:35.037940 kernel: KASLR enabled Sep 6 01:22:35.037947 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 01:22:35.037956 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 01:22:35.037963 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 01:22:35.037970 kernel: SMBIOS 3.1.0 present. Sep 6 01:22:35.037977 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 6 01:22:35.037984 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 01:22:35.037992 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 6 01:22:35.038000 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 6 01:22:35.038008 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 6 01:22:35.038015 kernel: audit: initializing netlink subsys (disabled) Sep 6 01:22:35.038022 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1 Sep 6 01:22:35.038030 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 01:22:35.038037 kernel: cpuidle: using governor menu Sep 6 01:22:35.038044 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 6 01:22:35.038054 kernel: ASID allocator initialised with 32768 entries Sep 6 01:22:35.038061 kernel: ACPI: bus type PCI registered Sep 6 01:22:35.038068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 01:22:35.038076 kernel: Serial: AMBA PL011 UART driver Sep 6 01:22:35.038084 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 01:22:35.038092 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 6 01:22:35.038099 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 01:22:35.038107 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 6 01:22:35.038115 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 01:22:35.038124 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 6 01:22:35.038132 kernel: ACPI: Added _OSI(Module Device) Sep 6 01:22:35.038139 kernel: ACPI: Added _OSI(Processor Device) Sep 6 01:22:35.038147 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 01:22:35.038155 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 01:22:35.038163 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 01:22:35.038170 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 01:22:35.038177 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 01:22:35.038186 kernel: ACPI: Interpreter enabled Sep 6 01:22:35.038196 kernel: ACPI: Using GIC for interrupt routing Sep 6 01:22:35.038204 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 6 01:22:35.038211 kernel: printk: console [ttyAMA0] enabled Sep 6 01:22:35.038219 kernel: printk: bootconsole [pl11] disabled Sep 6 01:22:35.038227 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 6 01:22:35.038235 kernel: iommu: Default domain type: Translated Sep 6 01:22:35.038242 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 6 01:22:35.038250 kernel: vgaarb: loaded Sep 6 01:22:35.038258 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 01:22:35.038265 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 01:22:35.038274 kernel: PTP clock support registered Sep 6 01:22:35.038281 kernel: Registered efivars operations Sep 6 01:22:35.038288 kernel: No ACPI PMU IRQ for CPU0 Sep 6 01:22:35.038296 kernel: No ACPI PMU IRQ for CPU1 Sep 6 01:22:35.038303 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 6 01:22:35.038310 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 01:22:35.038317 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 01:22:35.038323 kernel: pnp: PnP ACPI init Sep 6 01:22:35.038387 kernel: pnp: PnP ACPI: found 0 devices Sep 6 01:22:35.038397 kernel: NET: Registered PF_INET protocol family Sep 6 01:22:35.038404 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 01:22:35.038411 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 01:22:35.038418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 01:22:35.038425 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 01:22:35.038432 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 01:22:35.038438 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 01:22:35.038445 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 01:22:35.038453 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 01:22:35.038460 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 01:22:35.038467 kernel: PCI: CLS 0 bytes, default 64 Sep 6 01:22:35.038474 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 6 01:22:35.038481 kernel: kvm [1]: HYP mode not available Sep 6 01:22:35.038487 kernel: Initialise system trusted keyrings Sep 6 01:22:35.038494 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 01:22:35.038501 kernel: Key type asymmetric registered Sep 6 01:22:35.038507 kernel: Asymmetric key parser 'x509' registered Sep 6 01:22:35.038515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 01:22:35.038522 kernel: io scheduler mq-deadline registered Sep 6 01:22:35.038528 kernel: io scheduler kyber registered Sep 6 01:22:35.038535 kernel: io scheduler bfq registered Sep 6 01:22:35.038542 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 01:22:35.038548 kernel: thunder_xcv, ver 1.0 Sep 6 01:22:35.038555 kernel: thunder_bgx, ver 1.0 Sep 6 01:22:35.038562 kernel: nicpf, ver 1.0 Sep 6 01:22:35.038568 kernel: nicvf, ver 1.0 Sep 6 01:22:35.038715 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 6 01:22:35.038779 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T01:22:34 UTC (1757121754) Sep 6 01:22:35.038789 kernel: efifb: probing for efifb Sep 6 01:22:35.038796 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 6 01:22:35.038803 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 6 01:22:35.038810 kernel: efifb: scrolling: redraw Sep 6 01:22:35.038817 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 6 01:22:35.038823 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:22:35.038832 kernel: fb0: EFI VGA frame buffer device Sep 6 01:22:35.038839 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Sep 6 01:22:35.038845 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 01:22:35.038852 kernel: NET: Registered PF_INET6 protocol family Sep 6 01:22:35.038859 kernel: Segment Routing with IPv6 Sep 6 01:22:35.038865 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 01:22:35.038872 kernel: NET: Registered PF_PACKET protocol family Sep 6 01:22:35.038878 kernel: Key type dns_resolver registered Sep 6 01:22:35.038885 kernel: registered taskstats version 1 Sep 6 01:22:35.038892 kernel: Loading compiled-in X.509 certificates Sep 6 01:22:35.038900 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386' Sep 6 01:22:35.038906 kernel: Key type .fscrypt registered Sep 6 01:22:35.038913 kernel: Key type fscrypt-provisioning registered Sep 6 01:22:35.038920 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 01:22:35.038927 kernel: ima: Allocated hash algorithm: sha1 Sep 6 01:22:35.038933 kernel: ima: No architecture policies found Sep 6 01:22:35.038940 kernel: clk: Disabling unused clocks Sep 6 01:22:35.038946 kernel: Freeing unused kernel memory: 36416K Sep 6 01:22:35.038954 kernel: Run /init as init process Sep 6 01:22:35.038961 kernel: with arguments: Sep 6 01:22:35.038967 kernel: /init Sep 6 01:22:35.038974 kernel: with environment: Sep 6 01:22:35.038981 kernel: HOME=/ Sep 6 01:22:35.038987 kernel: TERM=linux Sep 6 01:22:35.038994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 01:22:35.039003 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:22:35.039013 systemd[1]: Detected virtualization microsoft. Sep 6 01:22:35.039021 systemd[1]: Detected architecture arm64. Sep 6 01:22:35.039028 systemd[1]: Running in initrd. Sep 6 01:22:35.039035 systemd[1]: No hostname configured, using default hostname. Sep 6 01:22:35.039042 systemd[1]: Hostname set to . Sep 6 01:22:35.039049 systemd[1]: Initializing machine ID from random generator. Sep 6 01:22:35.039056 systemd[1]: Queued start job for default target initrd.target. Sep 6 01:22:35.039063 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:22:35.039072 systemd[1]: Reached target cryptsetup.target. Sep 6 01:22:35.039079 systemd[1]: Reached target paths.target. Sep 6 01:22:35.039086 systemd[1]: Reached target slices.target. Sep 6 01:22:35.039093 systemd[1]: Reached target swap.target. Sep 6 01:22:35.039099 systemd[1]: Reached target timers.target. Sep 6 01:22:35.039107 systemd[1]: Listening on iscsid.socket. Sep 6 01:22:35.039114 systemd[1]: Listening on iscsiuio.socket. Sep 6 01:22:35.039121 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:22:35.039130 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:22:35.039138 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:22:35.039145 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:22:35.039152 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:22:35.039159 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:22:35.039166 systemd[1]: Reached target sockets.target. Sep 6 01:22:35.039173 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:22:35.039180 systemd[1]: Finished network-cleanup.service. 
Sep 6 01:22:35.039187 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 01:22:35.039196 systemd[1]: Starting systemd-journald.service... Sep 6 01:22:35.039203 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:22:35.039210 systemd[1]: Starting systemd-resolved.service... Sep 6 01:22:35.039222 systemd-journald[276]: Journal started Sep 6 01:22:35.039267 systemd-journald[276]: Runtime Journal (/run/log/journal/74945c51e0a34a88819846311935d4ec) is 8.0M, max 78.5M, 70.5M free. Sep 6 01:22:35.033882 systemd-modules-load[277]: Inserted module 'overlay' Sep 6 01:22:35.066126 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 01:22:35.078358 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 01:22:35.079020 systemd-resolved[278]: Positive Trust Anchors: Sep 6 01:22:35.095299 kernel: Bridge firewalling registered Sep 6 01:22:35.095323 systemd[1]: Started systemd-journald.service. Sep 6 01:22:35.079040 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:22:35.148208 kernel: audit: type=1130 audit(1757121755.112:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.148242 kernel: SCSI subsystem initialized Sep 6 01:22:35.148251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 01:22:35.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.079068 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:22:35.212076 kernel: device-mapper: uevent: version 1.0.3 Sep 6 01:22:35.212100 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 01:22:35.212110 kernel: audit: type=1130 audit(1757121755.163:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.081295 systemd-resolved[278]: Defaulting to hostname 'linux'. Sep 6 01:22:35.099833 systemd-modules-load[277]: Inserted module 'br_netfilter' Sep 6 01:22:35.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.113473 systemd[1]: Started systemd-resolved.service. 
Sep 6 01:22:35.252915 kernel: audit: type=1130 audit(1757121755.222:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.152919 systemd-modules-load[277]: Inserted module 'dm_multipath' Sep 6 01:22:35.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.213961 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:22:35.321646 kernel: audit: type=1130 audit(1757121755.248:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.321673 kernel: audit: type=1130 audit(1757121755.274:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.321682 kernel: audit: type=1130 audit(1757121755.300:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.223346 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 01:22:35.249024 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:22:35.275100 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 01:22:35.300794 systemd[1]: Reached target nss-lookup.target. Sep 6 01:22:35.330030 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 01:22:35.340980 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:22:35.364265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:22:35.370961 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 01:22:35.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.384428 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:22:35.412515 kernel: audit: type=1130 audit(1757121755.383:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.408689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:22:35.436644 kernel: audit: type=1130 audit(1757121755.408:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.436567 systemd[1]: Starting dracut-cmdline.service... 
Sep 6 01:22:35.460852 kernel: audit: type=1130 audit(1757121755.431:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.465467 dracut-cmdline[298]: dracut-dracut-053 Sep 6 01:22:35.469489 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 01:22:35.533361 kernel: Loading iSCSI transport class v2.0-870. Sep 6 01:22:35.548353 kernel: iscsi: registered transport (tcp) Sep 6 01:22:35.569582 kernel: iscsi: registered transport (qla4xxx) Sep 6 01:22:35.569628 kernel: QLogic iSCSI HBA Driver Sep 6 01:22:35.606125 systemd[1]: Finished dracut-cmdline.service. Sep 6 01:22:35.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:35.611620 systemd[1]: Starting dracut-pre-udev.service... Sep 6 01:22:35.681345 kernel: raid6: neonx8 gen() 13713 MB/s Sep 6 01:22:35.691392 kernel: raid6: neonx8 xor() 10746 MB/s Sep 6 01:22:35.705344 kernel: raid6: neonx4 gen() 13538 MB/s Sep 6 01:22:35.726347 kernel: raid6: neonx4 xor() 11124 MB/s Sep 6 01:22:35.747342 kernel: raid6: neonx2 gen() 13000 MB/s Sep 6 01:22:35.767341 kernel: raid6: neonx2 xor() 10310 MB/s Sep 6 01:22:35.788343 kernel: raid6: neonx1 gen() 10545 MB/s Sep 6 01:22:35.809371 kernel: raid6: neonx1 xor() 8786 MB/s Sep 6 01:22:35.830376 kernel: raid6: int64x8 gen() 6268 MB/s Sep 6 01:22:35.851377 kernel: raid6: int64x8 xor() 3544 MB/s Sep 6 01:22:35.872362 kernel: raid6: int64x4 gen() 7236 MB/s Sep 6 01:22:35.892345 kernel: raid6: int64x4 xor() 3851 MB/s Sep 6 01:22:35.913348 kernel: raid6: int64x2 gen() 6149 MB/s Sep 6 01:22:35.933342 kernel: raid6: int64x2 xor() 3322 MB/s Sep 6 01:22:35.953342 kernel: raid6: int64x1 gen() 5046 MB/s Sep 6 01:22:35.978975 kernel: raid6: int64x1 xor() 2646 MB/s Sep 6 01:22:35.978994 kernel: raid6: using algorithm neonx8 gen() 13713 MB/s Sep 6 01:22:35.979002 kernel: raid6: .... xor() 10746 MB/s, rmw enabled Sep 6 01:22:35.983634 kernel: raid6: using neon recovery algorithm Sep 6 01:22:36.004946 kernel: xor: measuring software checksum speed Sep 6 01:22:36.004959 kernel: 8regs : 17195 MB/sec Sep 6 01:22:36.008859 kernel: 32regs : 20676 MB/sec Sep 6 01:22:36.012686 kernel: arm64_neon : 27775 MB/sec Sep 6 01:22:36.012696 kernel: xor: using function: arm64_neon (27775 MB/sec) Sep 6 01:22:36.073361 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 01:22:36.084471 systemd[1]: Finished dracut-pre-udev.service. Sep 6 01:22:36.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:36.093000 audit: BPF prog-id=7 op=LOAD Sep 6 01:22:36.093000 audit: BPF prog-id=8 op=LOAD Sep 6 01:22:36.093832 systemd[1]: Starting systemd-udevd.service... Sep 6 01:22:36.112147 systemd-udevd[475]: Using default interface naming scheme 'v252'. Sep 6 01:22:36.119200 systemd[1]: Started systemd-udevd.service. Sep 6 01:22:36.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:36.129370 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 01:22:36.143818 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation Sep 6 01:22:36.171710 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 01:22:36.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:36.177646 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:22:36.217231 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:22:36.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:36.275546 kernel: hv_vmbus: Vmbus version:5.3 Sep 6 01:22:36.277355 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 6 01:22:36.285360 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 6 01:22:36.311358 kernel: hv_vmbus: registering driver hv_storvsc Sep 6 01:22:36.319693 kernel: scsi host1: storvsc_host_t Sep 6 01:22:36.319886 kernel: scsi host0: storvsc_host_t Sep 6 01:22:36.319909 kernel: hv_vmbus: registering driver hid_hyperv Sep 6 01:22:36.331304 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 6 01:22:36.331394 kernel: hv_vmbus: registering driver hv_netvsc Sep 6 01:22:36.349092 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 6 01:22:36.349171 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 6 01:22:36.359030 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 6 01:22:36.384010 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 6 01:22:36.401771 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 6 01:22:36.412688 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 6 01:22:36.412702 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 6 01:22:36.412825 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 6 01:22:36.412908 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 6 01:22:36.412996 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 6 01:22:36.413087 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 6 01:22:36.413186 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:22:36.413197 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 6 01:22:36.468432 kernel: hv_netvsc 002248bd-7571-0022-48bd-7571002248bd eth0: VF slot 1 added Sep 6 01:22:36.478610 kernel: hv_vmbus: registering driver hv_pci Sep 6 01:22:36.487147 kernel: hv_pci f62f2839-5e2e-4664-8858-4278b14caa28: PCI VMBus probing: Using version 0x10004 Sep 6 01:22:36.559210 kernel: hv_pci 
f62f2839-5e2e-4664-8858-4278b14caa28: PCI host bridge to bus 5e2e:00 Sep 6 01:22:36.559320 kernel: pci_bus 5e2e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 6 01:22:36.559450 kernel: pci_bus 5e2e:00: No busn resource found for root bus, will use [bus 00-ff] Sep 6 01:22:36.559527 kernel: pci 5e2e:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 6 01:22:36.559621 kernel: pci 5e2e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 6 01:22:36.559696 kernel: pci 5e2e:00:02.0: enabling Extended Tags Sep 6 01:22:36.559783 kernel: pci 5e2e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5e2e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 6 01:22:36.559871 kernel: pci_bus 5e2e:00: busn_res: [bus 00-ff] end is updated to 00 Sep 6 01:22:36.559946 kernel: pci 5e2e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 6 01:22:36.597286 kernel: mlx5_core 5e2e:00:02.0: enabling device (0000 -> 0002) Sep 6 01:22:36.830595 kernel: mlx5_core 5e2e:00:02.0: firmware version: 16.30.1284 Sep 6 01:22:36.830712 kernel: mlx5_core 5e2e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Sep 6 01:22:36.830797 kernel: hv_netvsc 002248bd-7571-0022-48bd-7571002248bd eth0: VF registering: eth1 Sep 6 01:22:36.830877 kernel: mlx5_core 5e2e:00:02.0 eth1: joined to eth0 Sep 6 01:22:36.840357 kernel: mlx5_core 5e2e:00:02.0 enP24110s1: renamed from eth1 Sep 6 01:22:36.944362 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (526) Sep 6 01:22:36.958067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:22:36.974387 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 01:22:37.152324 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 01:22:37.183483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 01:22:37.189687 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 01:22:37.203131 systemd[1]: Starting disk-uuid.service... Sep 6 01:22:37.230383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:22:37.240378 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:22:38.249836 disk-uuid[605]: The operation has completed successfully. Sep 6 01:22:38.255212 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:22:38.311974 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 01:22:38.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.312073 systemd[1]: Finished disk-uuid.service. Sep 6 01:22:38.329368 systemd[1]: Starting verity-setup.service... Sep 6 01:22:38.369410 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 01:22:38.613305 systemd[1]: Found device dev-mapper-usr.device. Sep 6 01:22:38.619105 systemd[1]: Mounting sysusr-usr.mount... Sep 6 01:22:38.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.626862 systemd[1]: Finished verity-setup.service. 
Sep 6 01:22:38.691022 systemd[1]: Mounted sysusr-usr.mount. Sep 6 01:22:38.699057 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 01:22:38.695530 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 01:22:38.696396 systemd[1]: Starting ignition-setup.service... Sep 6 01:22:38.704281 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 01:22:38.740541 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 01:22:38.740602 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:22:38.746409 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:22:38.803197 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 01:22:38.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.812000 audit: BPF prog-id=9 op=LOAD Sep 6 01:22:38.813489 systemd[1]: Starting systemd-networkd.service... Sep 6 01:22:38.829592 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 01:22:38.842805 systemd-networkd[874]: lo: Link UP Sep 6 01:22:38.842817 systemd-networkd[874]: lo: Gained carrier Sep 6 01:22:38.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.843238 systemd-networkd[874]: Enumeration completed Sep 6 01:22:38.846453 systemd[1]: Started systemd-networkd.service. Sep 6 01:22:38.847077 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:22:38.852440 systemd[1]: Reached target network.target. Sep 6 01:22:38.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.861639 systemd[1]: Starting iscsiuio.service... Sep 6 01:22:38.893948 iscsid[882]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:22:38.893948 iscsid[882]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 01:22:38.893948 iscsid[882]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 6 01:22:38.893948 iscsid[882]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 01:22:38.893948 iscsid[882]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 01:22:38.893948 iscsid[882]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:22:38.893948 iscsid[882]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 01:22:38.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:38.879466 systemd[1]: Started iscsiuio.service. Sep 6 01:22:38.889423 systemd[1]: Starting iscsid.service... Sep 6 01:22:38.897582 systemd[1]: Started iscsid.service. Sep 6 01:22:38.926669 systemd[1]: Starting dracut-initqueue.service... Sep 6 01:22:38.959616 systemd[1]: Finished dracut-initqueue.service. Sep 6 01:22:39.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.964862 systemd[1]: Reached target remote-fs-pre.target. Sep 6 01:22:39.076801 kernel: kauditd_printk_skb: 16 callbacks suppressed Sep 6 01:22:39.076825 kernel: audit: type=1130 audit(1757121759.024:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:39.076843 kernel: audit: type=1130 audit(1757121759.055:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:39.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:38.977808 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:22:38.990047 systemd[1]: Reached target remote-fs.target. Sep 6 01:22:38.999239 systemd[1]: Starting dracut-pre-mount.service... Sep 6 01:22:39.015500 systemd[1]: Finished ignition-setup.service. Sep 6 01:22:39.040713 systemd[1]: Finished dracut-pre-mount.service. Sep 6 01:22:39.110309 kernel: mlx5_core 5e2e:00:02.0 enP24110s1: Link up Sep 6 01:22:39.081593 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 01:22:39.150354 kernel: hv_netvsc 002248bd-7571-0022-48bd-7571002248bd eth0: Data path switched to VF: enP24110s1 Sep 6 01:22:39.150551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:22:39.157121 systemd-networkd[874]: enP24110s1: Link UP Sep 6 01:22:39.157212 systemd-networkd[874]: eth0: Link UP Sep 6 01:22:39.157372 systemd-networkd[874]: eth0: Gained carrier Sep 6 01:22:39.168962 systemd-networkd[874]: enP24110s1: Gained carrier Sep 6 01:22:39.184416 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:22:40.515476 systemd-networkd[874]: eth0: Gained IPv6LL Sep 6 01:22:41.620657 ignition[897]: Ignition 2.14.0 Sep 6 01:22:41.624164 ignition[897]: Stage: fetch-offline Sep 6 01:22:41.624265 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:41.624295 ignition[897]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:41.702313 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:41.702488 ignition[897]: parsed url from cmdline: "" Sep 6 01:22:41.708779 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 01:22:41.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:41.702493 ignition[897]: no config URL provided Sep 6 01:22:41.747434 kernel: audit: type=1130 audit(1757121761.713:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.735622 systemd[1]: Starting ignition-fetch.service... Sep 6 01:22:41.702498 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:22:41.702506 ignition[897]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:22:41.702511 ignition[897]: failed to fetch config: resource requires networking Sep 6 01:22:41.702631 ignition[897]: Ignition finished successfully Sep 6 01:22:41.752240 ignition[904]: Ignition 2.14.0 Sep 6 01:22:41.752247 ignition[904]: Stage: fetch Sep 6 01:22:41.752389 ignition[904]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:41.752413 ignition[904]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:41.759760 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:41.759988 ignition[904]: parsed url from cmdline: "" Sep 6 01:22:41.759992 ignition[904]: no config URL provided Sep 6 01:22:41.759997 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:22:41.760005 ignition[904]: no config at "/usr/lib/ignition/user.ign" Sep 6 01:22:41.760062 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 6 01:22:41.906751 ignition[904]: GET result: OK Sep 6 01:22:41.906818 ignition[904]: config has been read from IMDS userdata Sep 6 01:22:41.909697 unknown[904]: fetched base config from "system" Sep 6 01:22:41.943388 kernel: audit: type=1130 audit(1757121761.920:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.906842 ignition[904]: parsing config with SHA512: 453d5871aa9a125211a05b28b393333d26e853a4c5dd5e767132faa00c3026d98ff7a1aa0a5f6a577bb12175f0903253da2d359a2c845ca6d759183394f9c260 Sep 6 01:22:41.909705 unknown[904]: fetched base config from "system" Sep 6 01:22:41.910121 ignition[904]: fetch: fetch complete Sep 6 01:22:41.909710 unknown[904]: fetched user config from "azure" Sep 6 01:22:41.910126 ignition[904]: fetch: fetch passed Sep 6 01:22:41.911514 systemd[1]: Finished ignition-fetch.service. Sep 6 01:22:41.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.910180 ignition[904]: Ignition finished successfully Sep 6 01:22:41.994245 kernel: audit: type=1130 audit(1757121761.966:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.922037 systemd[1]: Starting ignition-kargs.service... 
Sep 6 01:22:42.024412 kernel: audit: type=1130 audit(1757121761.998:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:41.952762 ignition[910]: Ignition 2.14.0 Sep 6 01:22:41.962312 systemd[1]: Finished ignition-kargs.service. Sep 6 01:22:41.952769 ignition[910]: Stage: kargs Sep 6 01:22:41.968079 systemd[1]: Starting ignition-disks.service... Sep 6 01:22:41.952888 ignition[910]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:41.990550 systemd[1]: Finished ignition-disks.service. Sep 6 01:22:41.952907 ignition[910]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:41.998735 systemd[1]: Reached target initrd-root-device.target. Sep 6 01:22:41.955672 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:42.023614 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:22:41.959326 ignition[910]: kargs: kargs passed Sep 6 01:22:42.028365 systemd[1]: Reached target local-fs.target. Sep 6 01:22:41.959385 ignition[910]: Ignition finished successfully Sep 6 01:22:42.035761 systemd[1]: Reached target sysinit.target. Sep 6 01:22:41.978462 ignition[916]: Ignition 2.14.0 Sep 6 01:22:42.044012 systemd[1]: Reached target basic.target. Sep 6 01:22:41.978470 ignition[916]: Stage: disks Sep 6 01:22:42.052913 systemd[1]: Starting systemd-fsck-root.service... Sep 6 01:22:41.978593 ignition[916]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:41.978619 ignition[916]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:42.124890 systemd-fsck[924]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Sep 6 01:22:41.982390 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:42.169279 kernel: audit: type=1130 audit(1757121762.142:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:42.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:42.128129 systemd[1]: Finished systemd-fsck-root.service. Sep 6 01:22:41.984064 ignition[916]: disks: disks passed Sep 6 01:22:42.143949 systemd[1]: Mounting sysroot.mount... Sep 6 01:22:41.984121 ignition[916]: Ignition finished successfully Sep 6 01:22:42.198355 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 01:22:42.198558 systemd[1]: Mounted sysroot.mount. Sep 6 01:22:42.202537 systemd[1]: Reached target initrd-root-fs.target. Sep 6 01:22:42.241739 systemd[1]: Mounting sysroot-usr.mount... Sep 6 01:22:42.250051 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 6 01:22:42.261220 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Sep 6 01:22:42.261266 systemd[1]: Reached target ignition-diskful.target. Sep 6 01:22:42.277138 systemd[1]: Mounted sysroot-usr.mount. Sep 6 01:22:42.334718 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:22:42.339994 systemd[1]: Starting initrd-setup-root.service... Sep 6 01:22:42.364357 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935) Sep 6 01:22:42.371969 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 01:22:42.383324 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 01:22:42.383358 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:22:42.388111 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:22:42.392852 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 01:22:42.404045 initrd-setup-root[966]: cut: /sysroot/etc/group: No such file or directory Sep 6 01:22:42.427992 initrd-setup-root[974]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 01:22:42.437565 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 01:22:43.046962 systemd[1]: Finished initrd-setup-root.service. Sep 6 01:22:43.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:43.072665 systemd[1]: Starting ignition-mount.service... Sep 6 01:22:43.082691 kernel: audit: type=1130 audit(1757121763.051:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:43.082508 systemd[1]: Starting sysroot-boot.service... Sep 6 01:22:43.090160 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 01:22:43.090703 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 01:22:43.119681 systemd[1]: Finished sysroot-boot.service. Sep 6 01:22:43.150116 kernel: audit: type=1130 audit(1757121763.123:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:43.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:43.150185 ignition[1003]: INFO : Ignition 2.14.0 Sep 6 01:22:43.150185 ignition[1003]: INFO : Stage: mount Sep 6 01:22:43.150185 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:43.150185 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:43.150185 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:43.150185 ignition[1003]: INFO : mount: mount passed Sep 6 01:22:43.150185 ignition[1003]: INFO : Ignition finished successfully Sep 6 01:22:43.210236 kernel: audit: type=1130 audit(1757121763.153:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:43.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:43.137996 systemd[1]: Finished ignition-mount.service. Sep 6 01:22:43.519617 coreos-metadata[934]: Sep 06 01:22:43.519 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 6 01:22:43.528153 coreos-metadata[934]: Sep 06 01:22:43.523 INFO Fetch successful Sep 6 01:22:43.561860 coreos-metadata[934]: Sep 06 01:22:43.561 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 6 01:22:43.573461 coreos-metadata[934]: Sep 06 01:22:43.572 INFO Fetch successful Sep 6 01:22:43.626724 coreos-metadata[934]: Sep 06 01:22:43.626 INFO wrote hostname ci-3510.3.8-n-9a681c3ae9 to /sysroot/etc/hostname Sep 6 01:22:43.635202 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 6 01:22:43.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:43.641291 systemd[1]: Starting ignition-files.service... Sep 6 01:22:43.655853 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:22:43.677351 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1013) Sep 6 01:22:43.688883 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 01:22:43.688912 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:22:43.688928 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:22:43.698505 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 6 01:22:43.715589 ignition[1032]: INFO : Ignition 2.14.0 Sep 6 01:22:43.719720 ignition[1032]: INFO : Stage: files Sep 6 01:22:43.719720 ignition[1032]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:43.719720 ignition[1032]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:43.743315 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:43.743315 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Sep 6 01:22:43.743315 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 01:22:43.743315 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 01:22:43.812970 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 01:22:43.820843 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 01:22:43.820843 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 01:22:43.813512 unknown[1032]: wrote ssh authorized keys file for user: core Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4147755194" Sep 6 01:22:43.840842 ignition[1032]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4147755194": device or resource busy Sep 6 01:22:43.840842 ignition[1032]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4147755194", trying btrfs: device or resource busy Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4147755194" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] 
mounting "/dev/disk/by-label/OEM" at "/mnt/oem4147755194" Sep 6 01:22:43.840842 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem4147755194" Sep 6 01:22:43.840806 systemd[1]: mnt-oem4147755194.mount: Deactivated successfully. Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem4147755194" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156925715" Sep 6 01:22:44.001860 ignition[1032]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156925715": device or resource busy Sep 6 01:22:44.001860 ignition[1032]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1156925715", trying btrfs: device or resource busy Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156925715" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1156925715" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1156925715" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1156925715" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 01:22:44.001860 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 6 01:22:44.427565 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Sep 6 01:22:44.686003 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(f): [started] processing unit "waagent.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(f): [finished] processing unit "waagent.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(10): [started] processing unit "nvidia.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(10): [finished] processing unit "nvidia.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : 
files: op(11): [finished] setting preset to enabled for "waagent.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:22:44.698986 ignition[1032]: INFO : files: files passed Sep 6 01:22:44.698986 ignition[1032]: INFO : Ignition finished successfully Sep 6 01:22:44.948393 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:22:44.948423 kernel: audit: type=1130 audit(1757121764.710:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.948436 kernel: audit: type=1130 audit(1757121764.779:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.948446 kernel: audit: type=1131 audit(1757121764.779:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.948455 kernel: audit: type=1130 audit(1757121764.829:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.948464 kernel: audit: type=1130 audit(1757121764.908:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.948481 kernel: audit: type=1131 audit(1757121764.908:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:44.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.705776 systemd[1]: Finished ignition-files.service. Sep 6 01:22:44.713704 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 01:22:44.744452 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 01:22:44.981293 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 01:22:44.750470 systemd[1]: Starting ignition-quench.service... Sep 6 01:22:44.764447 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 01:22:45.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.764818 systemd[1]: Finished ignition-quench.service. Sep 6 01:22:45.033092 kernel: audit: type=1130 audit(1757121765.010:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.804190 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 01:22:44.829531 systemd[1]: Reached target ignition-complete.target. Sep 6 01:22:44.861939 systemd[1]: Starting initrd-parse-etc.service... Sep 6 01:22:44.904155 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 01:22:44.904266 systemd[1]: Finished initrd-parse-etc.service. Sep 6 01:22:44.909172 systemd[1]: Reached target initrd-fs.target. Sep 6 01:22:44.955466 systemd[1]: Reached target initrd.target. Sep 6 01:22:45.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.963716 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 01:22:45.112443 kernel: audit: type=1131 audit(1757121765.084:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:44.968723 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 01:22:45.006524 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 01:22:45.037536 systemd[1]: Starting initrd-cleanup.service... Sep 6 01:22:45.050227 systemd[1]: Stopped target nss-lookup.target. Sep 6 01:22:45.056830 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 01:22:45.067031 systemd[1]: Stopped target timers.target. Sep 6 01:22:45.075619 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 01:22:45.075738 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 01:22:45.105566 systemd[1]: Stopped target initrd.target. Sep 6 01:22:45.116596 systemd[1]: Stopped target basic.target. Sep 6 01:22:45.124937 systemd[1]: Stopped target ignition-complete.target. Sep 6 01:22:45.134110 systemd[1]: Stopped target ignition-diskful.target. Sep 6 01:22:45.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:45.143217 systemd[1]: Stopped target initrd-root-device.target. Sep 6 01:22:45.152247 systemd[1]: Stopped target remote-fs.target. Sep 6 01:22:45.248565 kernel: audit: type=1131 audit(1757121765.213:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.160991 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 01:22:45.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.169547 systemd[1]: Stopped target sysinit.target. Sep 6 01:22:45.179166 systemd[1]: Stopped target local-fs.target. Sep 6 01:22:45.287605 kernel: audit: type=1131 audit(1757121765.252:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.187761 systemd[1]: Stopped target local-fs-pre.target. Sep 6 01:22:45.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.196563 systemd[1]: Stopped target swap.target. Sep 6 01:22:45.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.204498 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 01:22:45.204617 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 01:22:45.235320 systemd[1]: Stopped target cryptsetup.target. Sep 6 01:22:45.336322 iscsid[882]: iscsid shutting down. Sep 6 01:22:45.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.351725 ignition[1071]: INFO : Ignition 2.14.0 Sep 6 01:22:45.351725 ignition[1071]: INFO : Stage: umount Sep 6 01:22:45.351725 ignition[1071]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:22:45.351725 ignition[1071]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:22:45.351725 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:22:45.351725 ignition[1071]: INFO : umount: umount passed Sep 6 01:22:45.351725 ignition[1071]: INFO : Ignition finished successfully Sep 6 01:22:45.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:45.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.244421 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 01:22:45.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.244536 systemd[1]: Stopped dracut-initqueue.service. Sep 6 01:22:45.274321 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 01:22:45.274500 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 01:22:45.283891 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 01:22:45.283987 systemd[1]: Stopped ignition-files.service. Sep 6 01:22:45.292007 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 01:22:45.292104 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 01:22:45.304961 systemd[1]: Stopping ignition-mount.service... Sep 6 01:22:45.316313 systemd[1]: Stopping iscsid.service... Sep 6 01:22:45.320869 systemd[1]: Stopping sysroot-boot.service... Sep 6 01:22:45.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.324784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 01:22:45.325004 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 01:22:45.343374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 01:22:45.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.343486 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 01:22:45.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.349618 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 01:22:45.349746 systemd[1]: Stopped iscsid.service. Sep 6 01:22:45.365343 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 01:22:45.366022 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 01:22:45.366108 systemd[1]: Stopped ignition-mount.service. 
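The Ignition files stage logged above (SSH keys for the core user, /etc/flatcar/update.conf, the kubernetes.raw sysext link and download, and the waagent.service/nvidia.service presets) is driven by a platform-supplied Ignition config that the log itself does not show. The sketch below is a purely hypothetical Ignition spec-3.x style config, expressed as a Python dict, that would request similar operations; the field names are recalled from the Ignition schema, and the SSH key and file contents are placeholders, so verify against the Ignition and Flatcar documentation before relying on it:

# Hypothetical sketch: roughly the shape of an Ignition (spec 3.x) config that
# would produce file writes and unit presets like the ones logged above.
# Placeholder values throughout; check the Ignition docs for the exact schema.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar/update.conf",
             # data: URL used here only as a placeholder payload
             "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "waagent.service", "enabled": True},
            {"name": "nvidia.service", "enabled": True},
        ],
    },
}

print(json.dumps(config, indent=2))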
Sep 6 01:22:45.373359 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 01:22:45.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.373453 systemd[1]: Stopped ignition-disks.service. Sep 6 01:22:45.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.384318 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 01:22:45.603000 audit: BPF prog-id=6 op=UNLOAD Sep 6 01:22:45.384380 systemd[1]: Stopped ignition-kargs.service. Sep 6 01:22:45.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.411575 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 01:22:45.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.411631 systemd[1]: Stopped ignition-fetch.service. Sep 6 01:22:45.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.420185 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 01:22:45.420229 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 01:22:45.429898 systemd[1]: Stopped target paths.target. Sep 6 01:22:45.439600 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 01:22:45.454395 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 01:22:45.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.465599 systemd[1]: Stopped target slices.target. Sep 6 01:22:45.473667 systemd[1]: Stopped target sockets.target. Sep 6 01:22:45.484230 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 01:22:45.714129 kernel: hv_netvsc 002248bd-7571-0022-48bd-7571002248bd eth0: Data path switched from VF: enP24110s1 Sep 6 01:22:45.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.484282 systemd[1]: Closed iscsid.socket. Sep 6 01:22:45.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.496435 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 01:22:45.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.496490 systemd[1]: Stopped ignition-setup.service. Sep 6 01:22:45.504750 systemd[1]: Stopping iscsiuio.service... Sep 6 01:22:45.518074 systemd[1]: iscsiuio.service: Deactivated successfully. 
Sep 6 01:22:45.518193 systemd[1]: Stopped iscsiuio.service. Sep 6 01:22:45.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.526150 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 01:22:45.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.526240 systemd[1]: Finished initrd-cleanup.service. Sep 6 01:22:45.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.536561 systemd[1]: Stopped target network.target. Sep 6 01:22:45.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.544700 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 01:22:45.544743 systemd[1]: Closed iscsiuio.socket. Sep 6 01:22:45.553778 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:22:45.560923 systemd[1]: Stopping systemd-resolved.service... Sep 6 01:22:45.570388 systemd-networkd[874]: eth0: DHCPv6 lease lost Sep 6 01:22:45.805000 audit: BPF prog-id=9 op=UNLOAD Sep 6 01:22:45.574935 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:22:45.575043 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:22:45.585496 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 01:22:45.585600 systemd[1]: Stopped systemd-resolved.service. Sep 6 01:22:45.595484 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 01:22:45.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.595529 systemd[1]: Closed systemd-networkd.socket. Sep 6 01:22:45.604292 systemd[1]: Stopping network-cleanup.service... Sep 6 01:22:45.613536 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 01:22:45.613605 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 01:22:45.618902 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:22:45.618960 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:22:45.632262 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 01:22:45.632315 systemd[1]: Stopped systemd-modules-load.service. Sep 6 01:22:45.637703 systemd[1]: Stopping systemd-udevd.service... Sep 6 01:22:45.648105 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 01:22:45.658413 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 01:22:45.658577 systemd[1]: Stopped systemd-udevd.service. Sep 6 01:22:45.673429 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 6 01:22:45.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.673470 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 01:22:45.683143 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 01:22:45.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:45.683196 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 01:22:45.692369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 01:22:45.692422 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 01:22:45.709835 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 01:22:45.709891 systemd[1]: Stopped dracut-cmdline.service. Sep 6 01:22:45.718633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 01:22:45.718681 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 01:22:45.732055 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 01:22:45.746197 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 01:22:45.991366 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Sep 6 01:22:45.746290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 01:22:45.760079 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 01:22:45.760146 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 01:22:45.764689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 01:22:45.764735 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 01:22:45.776359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 01:22:45.776863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 01:22:45.776967 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 01:22:45.825866 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 01:22:45.825979 systemd[1]: Stopped network-cleanup.service. Sep 6 01:22:45.902133 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 01:22:45.902240 systemd[1]: Stopped sysroot-boot.service. Sep 6 01:22:45.907385 systemd[1]: Reached target initrd-switch-root.target. Sep 6 01:22:45.917783 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 01:22:45.917857 systemd[1]: Stopped initrd-setup-root.service. Sep 6 01:22:45.927279 systemd[1]: Starting initrd-switch-root.service... Sep 6 01:22:45.945797 systemd[1]: Switching root. Sep 6 01:22:45.991829 systemd-journald[276]: Journal stopped Sep 6 01:22:57.497558 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 01:22:57.497580 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 01:22:57.497590 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 01:22:57.497600 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 01:22:57.497608 kernel: SELinux: policy capability open_perms=1 Sep 6 01:22:57.497616 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 01:22:57.497625 kernel: SELinux: policy capability always_check_network=0 Sep 6 01:22:57.497632 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 01:22:57.497640 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 01:22:57.497648 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 01:22:57.497656 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 01:22:57.497667 systemd[1]: Successfully loaded SELinux policy in 273.227ms. Sep 6 01:22:57.497677 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.358ms. Sep 6 01:22:57.497687 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:22:57.497697 systemd[1]: Detected virtualization microsoft. Sep 6 01:22:57.497707 systemd[1]: Detected architecture arm64. Sep 6 01:22:57.497718 systemd[1]: Detected first boot. Sep 6 01:22:57.497727 systemd[1]: Hostname set to . Sep 6 01:22:57.497736 systemd[1]: Initializing machine ID from random generator. Sep 6 01:22:57.497744 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 01:22:57.497753 kernel: kauditd_printk_skb: 41 callbacks suppressed Sep 6 01:22:57.497762 kernel: audit: type=1400 audit(1757121770.217:89): avc: denied { associate } for pid=1104 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 01:22:57.497774 kernel: audit: type=1300 audit(1757121770.217:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222ac a1=4000028378 a2=40000267c0 a3=32 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:57.497784 kernel: audit: type=1327 audit(1757121770.217:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:22:57.497793 kernel: audit: type=1400 audit(1757121770.232:90): avc: denied { associate } for pid=1104 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 01:22:57.497803 kernel: audit: type=1300 audit(1757121770.232:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022389 a2=1ed a3=0 items=2 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 
key=(null) Sep 6 01:22:57.497811 kernel: audit: type=1307 audit(1757121770.232:90): cwd="/" Sep 6 01:22:57.497821 kernel: audit: type=1302 audit(1757121770.232:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:57.497830 kernel: audit: type=1302 audit(1757121770.232:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:57.497840 kernel: audit: type=1327 audit(1757121770.232:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:22:57.497849 systemd[1]: Populated /etc with preset unit settings. Sep 6 01:22:57.497859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:22:57.497868 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:22:57.497879 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:22:57.497889 kernel: audit: type=1334 audit(1757121776.783:91): prog-id=12 op=LOAD Sep 6 01:22:57.497897 kernel: audit: type=1334 audit(1757121776.783:92): prog-id=3 op=UNLOAD Sep 6 01:22:57.497906 kernel: audit: type=1334 audit(1757121776.783:93): prog-id=13 op=LOAD Sep 6 01:22:57.497916 kernel: audit: type=1334 audit(1757121776.783:94): prog-id=14 op=LOAD Sep 6 01:22:57.497925 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 01:22:57.497934 kernel: audit: type=1334 audit(1757121776.783:95): prog-id=4 op=UNLOAD Sep 6 01:22:57.497945 systemd[1]: Stopped initrd-switch-root.service. Sep 6 01:22:57.497955 kernel: audit: type=1334 audit(1757121776.783:96): prog-id=5 op=UNLOAD Sep 6 01:22:57.497964 kernel: audit: type=1334 audit(1757121776.790:97): prog-id=15 op=LOAD Sep 6 01:22:57.497972 kernel: audit: type=1334 audit(1757121776.790:98): prog-id=12 op=UNLOAD Sep 6 01:22:57.497981 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 01:22:57.497991 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 01:22:57.498000 kernel: audit: type=1334 audit(1757121776.797:99): prog-id=16 op=LOAD Sep 6 01:22:57.498008 kernel: audit: type=1334 audit(1757121776.803:100): prog-id=17 op=LOAD Sep 6 01:22:57.498018 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 01:22:57.498027 systemd[1]: Created slice system-getty.slice. Sep 6 01:22:57.498038 systemd[1]: Created slice system-modprobe.slice. Sep 6 01:22:57.498047 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 01:22:57.498057 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 01:22:57.498066 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 01:22:57.498075 systemd[1]: Created slice user.slice. Sep 6 01:22:57.498085 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:22:57.498094 systemd[1]: Started systemd-ask-password-wall.path. 
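The SELinux policy-capability lines printed a little earlier (network_peer_controls, open_perms, extended_socket_class, and so on) are also exposed at runtime through selinuxfs. A small sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux location, that reads the same flags back:

# Sketch: read the policy capabilities the kernel logged above from selinuxfs
# (assumes it is mounted at /sys/fs/selinux, the conventional mount point).
import os

CAP_DIR = "/sys/fs/selinux/policy_capabilities"

def policy_capabilities() -> dict:
    caps = {}
    for name in sorted(os.listdir(CAP_DIR)):
        with open(os.path.join(CAP_DIR, name)) as f:
            # Each file contains "0" or "1".
            caps[name] = f.read().strip() == "1"
    return caps

if __name__ == "__main__":
    for name, enabled in policy_capabilities().items():
        print(f"{name}={int(enabled)}")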
Sep 6 01:22:57.498103 systemd[1]: Set up automount boot.automount. Sep 6 01:22:57.498113 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 01:22:57.498124 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 01:22:57.498133 systemd[1]: Stopped target initrd-fs.target. Sep 6 01:22:57.498142 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 01:22:57.498151 systemd[1]: Reached target integritysetup.target. Sep 6 01:22:57.498160 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:22:57.498170 systemd[1]: Reached target remote-fs.target. Sep 6 01:22:57.498179 systemd[1]: Reached target slices.target. Sep 6 01:22:57.498189 systemd[1]: Reached target swap.target. Sep 6 01:22:57.498198 systemd[1]: Reached target torcx.target. Sep 6 01:22:57.498207 systemd[1]: Reached target veritysetup.target. Sep 6 01:22:57.498217 systemd[1]: Listening on systemd-coredump.socket. Sep 6 01:22:57.498226 systemd[1]: Listening on systemd-initctl.socket. Sep 6 01:22:57.498235 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:22:57.498246 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:22:57.498256 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:22:57.498265 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 01:22:57.498275 systemd[1]: Mounting dev-hugepages.mount... Sep 6 01:22:57.498284 systemd[1]: Mounting dev-mqueue.mount... Sep 6 01:22:57.498293 systemd[1]: Mounting media.mount... Sep 6 01:22:57.498302 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 01:22:57.498313 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 01:22:57.498323 systemd[1]: Mounting tmp.mount... Sep 6 01:22:57.498346 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 01:22:57.498356 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:22:57.498366 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:22:57.498375 systemd[1]: Starting modprobe@configfs.service... Sep 6 01:22:57.498385 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:22:57.498394 systemd[1]: Starting modprobe@drm.service... Sep 6 01:22:57.498404 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:22:57.498413 systemd[1]: Starting modprobe@fuse.service... Sep 6 01:22:57.498424 systemd[1]: Starting modprobe@loop.service... Sep 6 01:22:57.498434 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 01:22:57.498443 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 01:22:57.498453 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 01:22:57.498462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 01:22:57.498471 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 01:22:57.498481 systemd[1]: Stopped systemd-journald.service. Sep 6 01:22:57.498490 systemd[1]: systemd-journald.service: Consumed 2.953s CPU time. Sep 6 01:22:57.498500 systemd[1]: Starting systemd-journald.service... Sep 6 01:22:57.498510 kernel: fuse: init (API version 7.34) Sep 6 01:22:57.498519 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:22:57.498529 kernel: loop: module loaded Sep 6 01:22:57.498539 systemd[1]: Starting systemd-network-generator.service... Sep 6 01:22:57.498548 systemd[1]: Starting systemd-remount-fs.service... Sep 6 01:22:57.498557 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:22:57.498567 systemd[1]: verity-setup.service: Deactivated successfully. 
Sep 6 01:22:57.498576 systemd[1]: Stopped verity-setup.service. Sep 6 01:22:57.498585 systemd[1]: Mounted dev-hugepages.mount. Sep 6 01:22:57.498596 systemd[1]: Mounted dev-mqueue.mount. Sep 6 01:22:57.498605 systemd[1]: Mounted media.mount. Sep 6 01:22:57.498615 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 01:22:57.498628 systemd-journald[1210]: Journal started Sep 6 01:22:57.498673 systemd-journald[1210]: Runtime Journal (/run/log/journal/571a65c71140497183b61ebde3db8fde) is 8.0M, max 78.5M, 70.5M free. Sep 6 01:22:48.113000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 01:22:48.863000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:22:48.863000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:22:48.863000 audit: BPF prog-id=10 op=LOAD Sep 6 01:22:48.863000 audit: BPF prog-id=10 op=UNLOAD Sep 6 01:22:48.863000 audit: BPF prog-id=11 op=LOAD Sep 6 01:22:48.863000 audit: BPF prog-id=11 op=UNLOAD Sep 6 01:22:50.217000 audit[1104]: AVC avc: denied { associate } for pid=1104 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 01:22:50.217000 audit[1104]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222ac a1=4000028378 a2=40000267c0 a3=32 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.217000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:22:50.232000 audit[1104]: AVC avc: denied { associate } for pid=1104 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 01:22:50.232000 audit[1104]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022389 a2=1ed a3=0 items=2 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:50.232000 audit: CWD cwd="/" Sep 6 01:22:50.232000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.232000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:50.232000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:22:56.783000 audit: BPF prog-id=12 op=LOAD Sep 6 01:22:56.783000 audit: BPF prog-id=3 op=UNLOAD Sep 6 01:22:56.783000 audit: BPF prog-id=13 op=LOAD Sep 6 01:22:56.783000 audit: BPF prog-id=14 op=LOAD Sep 6 01:22:56.783000 audit: BPF prog-id=4 op=UNLOAD Sep 6 01:22:56.783000 audit: BPF prog-id=5 op=UNLOAD Sep 6 01:22:56.790000 audit: BPF prog-id=15 op=LOAD Sep 6 01:22:56.790000 audit: BPF prog-id=12 op=UNLOAD Sep 6 01:22:56.797000 audit: BPF prog-id=16 op=LOAD Sep 6 01:22:56.803000 audit: BPF prog-id=17 op=LOAD Sep 6 01:22:56.803000 audit: BPF prog-id=13 op=UNLOAD Sep 6 01:22:56.803000 audit: BPF prog-id=14 op=UNLOAD Sep 6 01:22:56.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:56.841000 audit: BPF prog-id=15 op=UNLOAD Sep 6 01:22:56.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:56.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.378000 audit: BPF prog-id=18 op=LOAD Sep 6 01:22:57.380000 audit: BPF prog-id=19 op=LOAD Sep 6 01:22:57.380000 audit: BPF prog-id=20 op=LOAD Sep 6 01:22:57.380000 audit: BPF prog-id=16 op=UNLOAD Sep 6 01:22:57.380000 audit: BPF prog-id=17 op=UNLOAD Sep 6 01:22:57.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:57.495000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:22:57.495000 audit[1210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffecfec60 a2=4000 a3=1 items=0 ppid=1 pid=1210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:57.495000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:22:56.782141 systemd[1]: Queued start job for default target multi-user.target. Sep 6 01:22:50.168732 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:22:56.782154 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 01:22:50.169045 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 01:22:56.804552 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 01:22:50.169065 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 01:22:56.804916 systemd[1]: systemd-journald.service: Consumed 2.953s CPU time. Sep 6 01:22:50.169105 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 01:22:50.169115 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 01:22:50.169144 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 01:22:50.169157 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 01:22:50.169394 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 01:22:50.169429 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 01:22:50.169440 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 01:22:50.200625 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 01:22:50.200683 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 01:22:50.200717 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 01:22:50.200733 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 01:22:50.200755 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 01:22:50.200768 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:50Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 01:22:55.830518 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:22:55.830792 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:22:55.830889 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:22:55.831049 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:22:55.831098 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 01:22:55.831165 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-06T01:22:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 01:22:57.509917 systemd[1]: Started systemd-journald.service. Sep 6 01:22:57.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.510871 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 01:22:57.515663 systemd[1]: Mounted tmp.mount. Sep 6 01:22:57.519765 systemd[1]: Finished flatcar-tmpfiles.service. 
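The torcx-generator messages above end with the sealed system state written to /run/metadata/torcx as KEY="value" lines (TORCX_LOWER_PROFILES, TORCX_UPPER_PROFILE, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR). A minimal sketch for reading that file back, assuming the EnvironmentFile-style one-assignment-per-line format implied by the log:

# Sketch: parse the torcx metadata file whose contents are echoed in the log
# above (KEY="value" lines written to /run/metadata/torcx by torcx-generator).
import shlex

def read_torcx_metadata(path: str = "/run/metadata/torcx") -> dict:
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Values are double-quoted in the file; shlex strips the quotes.
            env[key] = shlex.split(value)[0] if value else ""
    return env

if __name__ == "__main__":
    for key, value in read_torcx_metadata().items():
        print(key, "=", value)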
Sep 6 01:22:57.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.524980 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:22:57.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.530110 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 01:22:57.530251 systemd[1]: Finished modprobe@configfs.service. Sep 6 01:22:57.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.535622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:22:57.535749 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:22:57.541025 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:22:57.541867 systemd[1]: Finished modprobe@drm.service. Sep 6 01:22:57.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.546666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:22:57.546856 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:22:57.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.552254 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 01:22:57.552405 systemd[1]: Finished modprobe@fuse.service. Sep 6 01:22:57.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:22:57.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.557322 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:22:57.557468 systemd[1]: Finished modprobe@loop.service. Sep 6 01:22:57.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.562196 systemd[1]: Finished systemd-network-generator.service. Sep 6 01:22:57.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.568104 systemd[1]: Finished systemd-remount-fs.service. Sep 6 01:22:57.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.573648 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:22:57.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.579311 systemd[1]: Reached target network-pre.target. Sep 6 01:22:57.585626 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 01:22:57.591956 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 01:22:57.596436 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 01:22:57.598215 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 01:22:57.604207 systemd[1]: Starting systemd-journal-flush.service... Sep 6 01:22:57.609093 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:22:57.610531 systemd[1]: Starting systemd-random-seed.service... Sep 6 01:22:57.615048 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:22:57.616385 systemd[1]: Starting systemd-sysusers.service... Sep 6 01:22:57.621903 systemd[1]: Starting systemd-udev-settle.service... Sep 6 01:22:57.628047 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:22:57.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.633736 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 01:22:57.638870 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 01:22:57.646424 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:22:57.653568 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Sep 6 01:22:57.683511 systemd[1]: Finished systemd-random-seed.service. Sep 6 01:22:57.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.688585 systemd[1]: Reached target first-boot-complete.target. Sep 6 01:22:57.690228 systemd-journald[1210]: Time spent on flushing to /var/log/journal/571a65c71140497183b61ebde3db8fde is 13.825ms for 1089 entries. Sep 6 01:22:57.690228 systemd-journald[1210]: System Journal (/var/log/journal/571a65c71140497183b61ebde3db8fde) is 8.0M, max 2.6G, 2.6G free. Sep 6 01:22:57.768156 systemd-journald[1210]: Received client request to flush runtime journal. Sep 6 01:22:57.769231 systemd[1]: Finished systemd-journal-flush.service. Sep 6 01:22:57.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:57.777855 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:22:57.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:58.245788 systemd[1]: Finished systemd-sysusers.service. Sep 6 01:22:58.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:58.252085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:22:58.639964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:22:58.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.005281 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 01:22:59.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.010000 audit: BPF prog-id=21 op=LOAD Sep 6 01:22:59.010000 audit: BPF prog-id=22 op=LOAD Sep 6 01:22:59.010000 audit: BPF prog-id=7 op=UNLOAD Sep 6 01:22:59.010000 audit: BPF prog-id=8 op=UNLOAD Sep 6 01:22:59.011959 systemd[1]: Starting systemd-udevd.service... Sep 6 01:22:59.031138 systemd-udevd[1229]: Using default interface naming scheme 'v252'. Sep 6 01:22:59.177282 systemd[1]: Started systemd-udevd.service. Sep 6 01:22:59.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.187000 audit: BPF prog-id=23 op=LOAD Sep 6 01:22:59.190095 systemd[1]: Starting systemd-networkd.service... Sep 6 01:22:59.229353 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 6 01:22:59.265017 systemd[1]: Starting systemd-userdbd.service... 
Sep 6 01:22:59.263000 audit: BPF prog-id=24 op=LOAD Sep 6 01:22:59.263000 audit: BPF prog-id=25 op=LOAD Sep 6 01:22:59.263000 audit: BPF prog-id=26 op=LOAD Sep 6 01:22:59.291367 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 01:22:59.319000 audit[1230]: AVC avc: denied { confidentiality } for pid=1230 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:22:59.332188 kernel: hv_vmbus: registering driver hv_balloon Sep 6 01:22:59.332320 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 6 01:22:59.332384 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 6 01:22:59.343391 kernel: hv_vmbus: registering driver hyperv_fb Sep 6 01:22:59.355544 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 6 01:22:59.355647 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 6 01:22:59.356778 systemd[1]: Started systemd-userdbd.service. Sep 6 01:22:59.361376 kernel: Console: switching to colour dummy device 80x25 Sep 6 01:22:59.364357 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:22:59.379453 kernel: hv_utils: Registering HyperV Utility Driver Sep 6 01:22:59.379564 kernel: hv_vmbus: registering driver hv_utils Sep 6 01:22:59.379591 kernel: hv_utils: Heartbeat IC version 3.0 Sep 6 01:22:59.430390 kernel: hv_utils: Shutdown IC version 3.2 Sep 6 01:22:59.430508 kernel: hv_utils: TimeSync IC version 4.0 Sep 6 01:22:59.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.319000 audit[1230]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaea04f6b0 a1=aa2c a2=ffff805f24b0 a3=aaaae9fb1010 items=12 ppid=1229 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:22:59.319000 audit: CWD cwd="/" Sep 6 01:22:59.319000 audit: PATH item=0 name=(null) inode=6875 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=1 name=(null) inode=10664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=2 name=(null) inode=10664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=3 name=(null) inode=10665 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=4 name=(null) inode=10664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=5 name=(null) inode=10666 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=6 name=(null) inode=10664 dev=00:0a mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=7 name=(null) inode=10667 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=8 name=(null) inode=10664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=9 name=(null) inode=10668 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=10 name=(null) inode=10664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PATH item=11 name=(null) inode=10669 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:22:59.319000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 01:22:59.639563 systemd-networkd[1250]: lo: Link UP Sep 6 01:22:59.639577 systemd-networkd[1250]: lo: Gained carrier Sep 6 01:22:59.640036 systemd-networkd[1250]: Enumeration completed Sep 6 01:22:59.640141 systemd[1]: Started systemd-networkd.service. Sep 6 01:22:59.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.646502 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:22:59.653400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:22:59.659434 systemd[1]: Finished systemd-udev-settle.service. Sep 6 01:22:59.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.665646 systemd[1]: Starting lvm2-activation-early.service... Sep 6 01:22:59.666902 systemd-networkd[1250]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:22:59.719762 kernel: mlx5_core 5e2e:00:02.0 enP24110s1: Link up Sep 6 01:22:59.746768 kernel: hv_netvsc 002248bd-7571-0022-48bd-7571002248bd eth0: Data path switched to VF: enP24110s1 Sep 6 01:22:59.746761 systemd-networkd[1250]: enP24110s1: Link UP Sep 6 01:22:59.746859 systemd-networkd[1250]: eth0: Link UP Sep 6 01:22:59.746863 systemd-networkd[1250]: eth0: Gained carrier Sep 6 01:22:59.752024 systemd-networkd[1250]: enP24110s1: Gained carrier Sep 6 01:22:59.763869 systemd-networkd[1250]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:22:59.940940 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:22:59.983777 systemd[1]: Finished lvm2-activation-early.service. Sep 6 01:22:59.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:22:59.989360 systemd[1]: Reached target cryptsetup.target. 
Sep 6 01:22:59.995628 systemd[1]: Starting lvm2-activation.service... Sep 6 01:23:00.000344 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:23:00.027803 systemd[1]: Finished lvm2-activation.service. Sep 6 01:23:00.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.032704 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:23:00.037686 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 01:23:00.037719 systemd[1]: Reached target local-fs.target. Sep 6 01:23:00.042429 systemd[1]: Reached target machines.target. Sep 6 01:23:00.048431 systemd[1]: Starting ldconfig.service... Sep 6 01:23:00.052539 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.052617 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:00.053968 systemd[1]: Starting systemd-boot-update.service... Sep 6 01:23:00.059798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 01:23:00.066764 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 01:23:00.073659 systemd[1]: Starting systemd-sysext.service... Sep 6 01:23:00.112626 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1309 (bootctl) Sep 6 01:23:00.113981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 01:23:00.152504 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 01:23:00.164563 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 01:23:00.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.217579 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 01:23:00.217782 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 01:23:00.235003 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 01:23:00.235673 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 01:23:00.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.268780 kernel: loop0: detected capacity change from 0 to 207008 Sep 6 01:23:00.315760 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 01:23:00.334838 kernel: loop1: detected capacity change from 0 to 207008 Sep 6 01:23:00.344394 (sd-sysext)[1321]: Using extensions 'kubernetes'. Sep 6 01:23:00.345703 (sd-sysext)[1321]: Merged extensions into '/usr'. Sep 6 01:23:00.361386 systemd-fsck[1317]: fsck.fat 4.2 (2021-01-31) Sep 6 01:23:00.361386 systemd-fsck[1317]: /dev/sda1: 236 files, 117310/258078 clusters Sep 6 01:23:00.364550 systemd[1]: Mounting usr-share-oem.mount... Sep 6 01:23:00.372264 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.373775 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 6 01:23:00.380484 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:23:00.387951 systemd[1]: Starting modprobe@loop.service... Sep 6 01:23:00.392403 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.392572 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:00.395297 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 01:23:00.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.402383 systemd[1]: Mounted usr-share-oem.mount. Sep 6 01:23:00.407333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:23:00.407478 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:23:00.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.412582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:23:00.412713 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:23:00.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.418167 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:23:00.418297 systemd[1]: Finished modprobe@loop.service. Sep 6 01:23:00.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.426471 systemd[1]: Mounting boot.mount... Sep 6 01:23:00.430094 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:23:00.430168 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.430922 systemd[1]: Finished systemd-sysext.service. Sep 6 01:23:00.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.437956 systemd[1]: Starting ensure-sysext.service... 
Sep 6 01:23:00.443233 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 01:23:00.452343 systemd[1]: Mounted boot.mount. Sep 6 01:23:00.458010 systemd[1]: Reloading. Sep 6 01:23:00.474184 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 01:23:00.493911 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 01:23:00.510929 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 01:23:00.529180 /usr/lib/systemd/system-generators/torcx-generator[1354]: time="2025-09-06T01:23:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:23:00.529213 /usr/lib/systemd/system-generators/torcx-generator[1354]: time="2025-09-06T01:23:00Z" level=info msg="torcx already run" Sep 6 01:23:00.596047 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:23:00.596235 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:23:00.612080 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:23:00.677000 audit: BPF prog-id=27 op=LOAD Sep 6 01:23:00.677000 audit: BPF prog-id=24 op=UNLOAD Sep 6 01:23:00.677000 audit: BPF prog-id=28 op=LOAD Sep 6 01:23:00.677000 audit: BPF prog-id=29 op=LOAD Sep 6 01:23:00.678000 audit: BPF prog-id=25 op=UNLOAD Sep 6 01:23:00.678000 audit: BPF prog-id=26 op=UNLOAD Sep 6 01:23:00.678000 audit: BPF prog-id=30 op=LOAD Sep 6 01:23:00.678000 audit: BPF prog-id=31 op=LOAD Sep 6 01:23:00.678000 audit: BPF prog-id=21 op=UNLOAD Sep 6 01:23:00.678000 audit: BPF prog-id=22 op=UNLOAD Sep 6 01:23:00.680000 audit: BPF prog-id=32 op=LOAD Sep 6 01:23:00.680000 audit: BPF prog-id=18 op=UNLOAD Sep 6 01:23:00.680000 audit: BPF prog-id=33 op=LOAD Sep 6 01:23:00.680000 audit: BPF prog-id=34 op=LOAD Sep 6 01:23:00.680000 audit: BPF prog-id=19 op=UNLOAD Sep 6 01:23:00.680000 audit: BPF prog-id=20 op=UNLOAD Sep 6 01:23:00.682000 audit: BPF prog-id=35 op=LOAD Sep 6 01:23:00.682000 audit: BPF prog-id=23 op=UNLOAD Sep 6 01:23:00.690030 systemd[1]: Finished systemd-boot-update.service. Sep 6 01:23:00.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.703968 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.705703 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:23:00.711676 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:23:00.717907 systemd[1]: Starting modprobe@loop.service... Sep 6 01:23:00.721954 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 01:23:00.722094 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:00.722985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:23:00.723130 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:23:00.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.728778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:23:00.728909 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:23:00.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.735453 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:23:00.735591 systemd[1]: Finished modprobe@loop.service. Sep 6 01:23:00.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.741940 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.743335 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:23:00.748892 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:23:00.754492 systemd[1]: Starting modprobe@loop.service... Sep 6 01:23:00.758654 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.758903 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:00.759756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:23:00.759906 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:23:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.765009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 6 01:23:00.765144 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:23:00.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.770566 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:23:00.770705 systemd[1]: Finished modprobe@loop.service. Sep 6 01:23:00.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.778376 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.779878 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:23:00.785656 systemd[1]: Starting modprobe@drm.service... Sep 6 01:23:00.791308 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:23:00.797269 systemd[1]: Starting modprobe@loop.service... Sep 6 01:23:00.801533 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.801680 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:00.802686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:23:00.802853 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:23:00.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.808067 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:23:00.808203 systemd[1]: Finished modprobe@drm.service. Sep 6 01:23:00.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.813021 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:23:00.813156 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:23:00.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:23:00.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.818542 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:23:00.818669 systemd[1]: Finished modprobe@loop.service. Sep 6 01:23:00.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.824492 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:23:00.824566 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:23:00.825965 systemd[1]: Finished ensure-sysext.service. Sep 6 01:23:00.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.961361 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 01:23:00.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:00.968203 systemd[1]: Starting audit-rules.service... Sep 6 01:23:00.973562 systemd[1]: Starting clean-ca-certificates.service... Sep 6 01:23:00.979771 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 01:23:00.986000 audit: BPF prog-id=36 op=LOAD Sep 6 01:23:00.988549 systemd[1]: Starting systemd-resolved.service... Sep 6 01:23:00.993000 audit: BPF prog-id=37 op=LOAD Sep 6 01:23:00.995595 systemd[1]: Starting systemd-timesyncd.service... Sep 6 01:23:01.000975 systemd[1]: Starting systemd-update-utmp.service... Sep 6 01:23:01.013131 systemd[1]: Finished clean-ca-certificates.service. Sep 6 01:23:01.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:01.018718 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:23:01.086000 audit[1428]: SYSTEM_BOOT pid=1428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 01:23:01.090639 systemd[1]: Finished systemd-update-utmp.service. Sep 6 01:23:01.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:01.100359 systemd[1]: Started systemd-timesyncd.service. 
Sep 6 01:23:01.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:01.105835 systemd[1]: Reached target time-set.target. Sep 6 01:23:01.162139 systemd-resolved[1426]: Positive Trust Anchors: Sep 6 01:23:01.162545 systemd-resolved[1426]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:23:01.162624 systemd-resolved[1426]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:23:01.164887 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 01:23:01.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:01.231774 systemd-resolved[1426]: Using system hostname 'ci-3510.3.8-n-9a681c3ae9'. Sep 6 01:23:01.233600 systemd[1]: Started systemd-resolved.service. Sep 6 01:23:01.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:23:01.238593 systemd[1]: Reached target network.target. Sep 6 01:23:01.243241 systemd[1]: Reached target nss-lookup.target. Sep 6 01:23:01.403000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 01:23:01.403000 audit[1443]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe01e32f0 a2=420 a3=0 items=0 ppid=1422 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:23:01.403000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 01:23:01.419005 augenrules[1443]: No rules Sep 6 01:23:01.420027 systemd[1]: Finished audit-rules.service. Sep 6 01:23:01.447976 systemd-timesyncd[1427]: Contacted time server 207.58.172.126:123 (0.flatcar.pool.ntp.org). Sep 6 01:23:01.448054 systemd-timesyncd[1427]: Initial clock synchronization to Sat 2025-09-06 01:23:01.445250 UTC. Sep 6 01:23:01.545932 systemd-networkd[1250]: eth0: Gained IPv6LL Sep 6 01:23:01.548250 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:23:01.554062 systemd[1]: Reached target network-online.target. Sep 6 01:23:07.556538 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 01:23:07.567428 systemd[1]: Finished ldconfig.service. Sep 6 01:23:07.573953 systemd[1]: Starting systemd-update-done.service... Sep 6 01:23:07.614886 systemd[1]: Finished systemd-update-done.service. Sep 6 01:23:07.620054 systemd[1]: Reached target sysinit.target. Sep 6 01:23:07.624849 systemd[1]: Started motdgen.path. 
Sep 6 01:23:07.629061 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 01:23:07.636133 systemd[1]: Started logrotate.timer. Sep 6 01:23:07.640281 systemd[1]: Started mdadm.timer. Sep 6 01:23:07.643984 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 01:23:07.648965 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:23:07.649003 systemd[1]: Reached target paths.target. Sep 6 01:23:07.653492 systemd[1]: Reached target timers.target. Sep 6 01:23:07.658779 systemd[1]: Listening on dbus.socket. Sep 6 01:23:07.664591 systemd[1]: Starting docker.socket... Sep 6 01:23:07.670973 systemd[1]: Listening on sshd.socket. Sep 6 01:23:07.675993 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:07.676554 systemd[1]: Listening on docker.socket. Sep 6 01:23:07.681071 systemd[1]: Reached target sockets.target. Sep 6 01:23:07.685621 systemd[1]: Reached target basic.target. Sep 6 01:23:07.690078 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:23:07.690110 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:23:07.691433 systemd[1]: Starting containerd.service... Sep 6 01:23:07.697066 systemd[1]: Starting dbus.service... Sep 6 01:23:07.702209 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 01:23:07.708367 systemd[1]: Starting extend-filesystems.service... Sep 6 01:23:07.712984 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 01:23:07.714479 systemd[1]: Starting kubelet.service... Sep 6 01:23:07.719692 systemd[1]: Starting motdgen.service... Sep 6 01:23:07.724760 systemd[1]: Started nvidia.service. Sep 6 01:23:07.731055 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 01:23:07.737594 systemd[1]: Starting sshd-keygen.service... Sep 6 01:23:07.743854 systemd[1]: Starting systemd-logind.service... Sep 6 01:23:07.748877 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:23:07.748954 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 01:23:07.749452 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 01:23:07.750364 systemd[1]: Starting update-engine.service... Sep 6 01:23:07.756109 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 01:23:07.768098 jq[1453]: false Sep 6 01:23:07.769144 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 01:23:07.769327 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 01:23:07.777661 jq[1470]: true Sep 6 01:23:07.795695 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 01:23:07.795879 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Sep 6 01:23:07.805854 extend-filesystems[1454]: Found loop1 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda1 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda2 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda3 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found usr Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda4 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda6 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda7 Sep 6 01:23:07.810476 extend-filesystems[1454]: Found sda9 Sep 6 01:23:07.810476 extend-filesystems[1454]: Checking size of /dev/sda9 Sep 6 01:23:07.888127 jq[1474]: true Sep 6 01:23:07.829594 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 01:23:07.829795 systemd[1]: Finished motdgen.service. Sep 6 01:23:07.877656 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 01:23:07.880167 systemd-logind[1467]: New seat seat0. Sep 6 01:23:07.903337 extend-filesystems[1454]: Old size kept for /dev/sda9 Sep 6 01:23:07.914644 extend-filesystems[1454]: Found sr0 Sep 6 01:23:07.909261 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 01:23:07.922336 env[1476]: time="2025-09-06T01:23:07.919547449Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 01:23:07.909438 systemd[1]: Finished extend-filesystems.service. Sep 6 01:23:07.964276 env[1476]: time="2025-09-06T01:23:07.964222520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 01:23:07.964804 env[1476]: time="2025-09-06T01:23:07.964772016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:23:07.972017 env[1476]: time="2025-09-06T01:23:07.971970970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:23:07.972147 env[1476]: time="2025-09-06T01:23:07.972130151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:23:07.972727 env[1476]: time="2025-09-06T01:23:07.972699164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:23:07.973601 env[1476]: time="2025-09-06T01:23:07.973578581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 01:23:07.973692 env[1476]: time="2025-09-06T01:23:07.973675610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 01:23:07.973773 env[1476]: time="2025-09-06T01:23:07.973758440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 01:23:07.973919 env[1476]: time="2025-09-06T01:23:07.973903063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 6 01:23:07.974201 env[1476]: time="2025-09-06T01:23:07.974178391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:23:07.975299 env[1476]: time="2025-09-06T01:23:07.975271862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:23:07.975384 env[1476]: time="2025-09-06T01:23:07.975369491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 01:23:07.975505 env[1476]: time="2025-09-06T01:23:07.975485877Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 01:23:07.975583 env[1476]: time="2025-09-06T01:23:07.975569067Z" level=info msg="metadata content store policy set" policy=shared Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997478333Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997539806Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997556444Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997681509Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997699987Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997714345Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.997726904Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998138775Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998158773Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998171492Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998184130Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998206328Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998381267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 01:23:07.998880 env[1476]: time="2025-09-06T01:23:07.998473136Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 6 01:23:07.999245 env[1476]: time="2025-09-06T01:23:07.998847092Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 01:23:07.999524 env[1476]: time="2025-09-06T01:23:07.999306478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999524 env[1476]: time="2025-09-06T01:23:07.999333915Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 01:23:07.999524 env[1476]: time="2025-09-06T01:23:07.999404067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999524 env[1476]: time="2025-09-06T01:23:07.999419545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999524 env[1476]: time="2025-09-06T01:23:07.999432383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999524 env[1476]: time="2025-09-06T01:23:07.999502215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999934 env[1476]: time="2025-09-06T01:23:07.999675715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999934 env[1476]: time="2025-09-06T01:23:07.999697592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999934 env[1476]: time="2025-09-06T01:23:07.999711871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 01:23:07.999934 env[1476]: time="2025-09-06T01:23:07.999723349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 01:23:08.000142 env[1476]: time="2025-09-06T01:23:08.000072708Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 01:23:08.000347 env[1476]: time="2025-09-06T01:23:08.000328558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 01:23:08.000444 env[1476]: time="2025-09-06T01:23:08.000429306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 01:23:08.000516 env[1476]: time="2025-09-06T01:23:08.000502658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 01:23:08.000585 env[1476]: time="2025-09-06T01:23:08.000561011Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 01:23:08.000669 env[1476]: time="2025-09-06T01:23:08.000638362Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 01:23:08.001260 env[1476]: time="2025-09-06T01:23:08.001243454Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 01:23:08.002069 env[1476]: time="2025-09-06T01:23:08.002043926Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 01:23:08.002219 env[1476]: time="2025-09-06T01:23:08.002202429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 01:23:08.002570 env[1476]: time="2025-09-06T01:23:08.002519314Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.004895092Z" level=info msg="Connect containerd service" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.004981963Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.005981093Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.006259102Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.006306217Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011304386Z" level=info msg="containerd successfully booted in 0.094753s" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011330183Z" level=info msg="Start subscribing containerd event" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011398216Z" level=info msg="Start recovering state" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011469288Z" level=info msg="Start event monitor" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011484886Z" level=info msg="Start snapshots syncer" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011496245Z" level=info msg="Start cni network conf syncer for default" Sep 6 01:23:08.020478 env[1476]: time="2025-09-06T01:23:08.011504964Z" level=info msg="Start streaming server" Sep 6 01:23:08.006440 systemd[1]: Started containerd.service. Sep 6 01:23:08.022778 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:23:08.023518 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 01:23:08.060395 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 01:23:08.102270 dbus-daemon[1452]: [system] SELinux support is enabled Sep 6 01:23:08.102472 systemd[1]: Started dbus.service. Sep 6 01:23:08.108368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 01:23:08.108390 systemd[1]: Reached target system-config.target. Sep 6 01:23:08.117858 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 01:23:08.117883 systemd[1]: Reached target user-config.target. Sep 6 01:23:08.126496 systemd[1]: Started systemd-logind.service. Sep 6 01:23:08.448201 update_engine[1469]: I0906 01:23:08.432473 1469 main.cc:92] Flatcar Update Engine starting Sep 6 01:23:08.604936 systemd[1]: Started update-engine.service. Sep 6 01:23:08.605393 update_engine[1469]: I0906 01:23:08.605001 1469 update_check_scheduler.cc:74] Next update check in 5m45s Sep 6 01:23:08.615071 systemd[1]: Started locksmithd.service. Sep 6 01:23:08.713665 systemd[1]: Started kubelet.service. Sep 6 01:23:09.155311 kubelet[1556]: E0906 01:23:09.155205 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:23:09.156939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:23:09.157076 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:23:09.975352 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 01:23:10.392254 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 01:23:10.411305 systemd[1]: Finished sshd-keygen.service. Sep 6 01:23:10.417987 systemd[1]: Starting issuegen.service... Sep 6 01:23:10.423715 systemd[1]: Started waagent.service. Sep 6 01:23:10.428607 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 01:23:10.428831 systemd[1]: Finished issuegen.service. Sep 6 01:23:10.434719 systemd[1]: Starting systemd-user-sessions.service... Sep 6 01:23:10.512857 systemd[1]: Finished systemd-user-sessions.service. 
Sep 6 01:23:10.520401 systemd[1]: Started getty@tty1.service. Sep 6 01:23:10.527005 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 6 01:23:10.532356 systemd[1]: Reached target getty.target. Sep 6 01:23:10.536629 systemd[1]: Reached target multi-user.target. Sep 6 01:23:10.543601 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 01:23:10.552468 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 01:23:10.552633 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 01:23:10.559822 systemd[1]: Startup finished in 751ms (kernel) + 13.011s (initrd) + 22.865s (userspace) = 36.629s. Sep 6 01:23:11.226593 login[1580]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 6 01:23:11.228446 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:23:11.283982 systemd[1]: Created slice user-500.slice. Sep 6 01:23:11.285204 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 01:23:11.287887 systemd-logind[1467]: New session 2 of user core. Sep 6 01:23:11.325571 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 01:23:11.327223 systemd[1]: Starting user@500.service... Sep 6 01:23:11.360827 (systemd)[1584]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:11.536025 systemd[1584]: Queued start job for default target default.target. Sep 6 01:23:11.536537 systemd[1584]: Reached target paths.target. Sep 6 01:23:11.536558 systemd[1584]: Reached target sockets.target. Sep 6 01:23:11.536569 systemd[1584]: Reached target timers.target. Sep 6 01:23:11.536579 systemd[1584]: Reached target basic.target. Sep 6 01:23:11.536621 systemd[1584]: Reached target default.target. Sep 6 01:23:11.536643 systemd[1584]: Startup finished in 169ms. Sep 6 01:23:11.536698 systemd[1]: Started user@500.service. Sep 6 01:23:11.537670 systemd[1]: Started session-2.scope. Sep 6 01:23:12.226966 login[1580]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:23:12.231632 systemd[1]: Started session-1.scope. Sep 6 01:23:12.232034 systemd-logind[1467]: New session 1 of user core. Sep 6 01:23:17.569724 waagent[1578]: 2025-09-06T01:23:17.569601Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 6 01:23:17.576985 waagent[1578]: 2025-09-06T01:23:17.576884Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 6 01:23:17.581909 waagent[1578]: 2025-09-06T01:23:17.581826Z INFO Daemon Daemon Python: 3.9.16 Sep 6 01:23:17.587028 waagent[1578]: 2025-09-06T01:23:17.586902Z INFO Daemon Daemon Run daemon Sep 6 01:23:17.591606 waagent[1578]: 2025-09-06T01:23:17.591521Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 6 01:23:17.609685 waagent[1578]: 2025-09-06T01:23:17.609529Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 6 01:23:17.625681 waagent[1578]: 2025-09-06T01:23:17.625520Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 01:23:17.635819 waagent[1578]: 2025-09-06T01:23:17.635710Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 01:23:17.641089 waagent[1578]: 2025-09-06T01:23:17.641004Z INFO Daemon Daemon Using waagent for provisioning Sep 6 01:23:17.647124 waagent[1578]: 2025-09-06T01:23:17.647044Z INFO Daemon Daemon Activate resource disk Sep 6 01:23:17.652233 waagent[1578]: 2025-09-06T01:23:17.652153Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 6 01:23:17.667053 waagent[1578]: 2025-09-06T01:23:17.666963Z INFO Daemon Daemon Found device: None Sep 6 01:23:17.671818 waagent[1578]: 2025-09-06T01:23:17.671710Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 6 01:23:17.680605 waagent[1578]: 2025-09-06T01:23:17.680520Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 6 01:23:17.693085 waagent[1578]: 2025-09-06T01:23:17.693007Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 01:23:17.699131 waagent[1578]: 2025-09-06T01:23:17.699048Z INFO Daemon Daemon Running default provisioning handler Sep 6 01:23:17.713189 waagent[1578]: 2025-09-06T01:23:17.713029Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 6 01:23:17.729297 waagent[1578]: 2025-09-06T01:23:17.729130Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 6 01:23:17.740159 waagent[1578]: 2025-09-06T01:23:17.740053Z INFO Daemon Daemon cloud-init is enabled: False Sep 6 01:23:17.745683 waagent[1578]: 2025-09-06T01:23:17.745582Z INFO Daemon Daemon Copying ovf-env.xml Sep 6 01:23:17.824437 waagent[1578]: 2025-09-06T01:23:17.824240Z INFO Daemon Daemon Successfully mounted dvd Sep 6 01:23:17.903502 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 6 01:23:17.940664 waagent[1578]: 2025-09-06T01:23:17.940500Z INFO Daemon Daemon Detect protocol endpoint Sep 6 01:23:17.945852 waagent[1578]: 2025-09-06T01:23:17.945761Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 6 01:23:17.951858 waagent[1578]: 2025-09-06T01:23:17.951772Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 6 01:23:17.958500 waagent[1578]: 2025-09-06T01:23:17.958417Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 6 01:23:17.964118 waagent[1578]: 2025-09-06T01:23:17.964040Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 6 01:23:17.969547 waagent[1578]: 2025-09-06T01:23:17.969471Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 6 01:23:18.077417 waagent[1578]: 2025-09-06T01:23:18.077283Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 6 01:23:18.085371 waagent[1578]: 2025-09-06T01:23:18.085314Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 6 01:23:18.091031 waagent[1578]: 2025-09-06T01:23:18.090935Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 6 01:23:18.640205 waagent[1578]: 2025-09-06T01:23:18.640051Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 6 01:23:18.656170 waagent[1578]: 2025-09-06T01:23:18.656078Z INFO Daemon Daemon Forcing an update of the goal state.. Sep 6 01:23:18.662968 waagent[1578]: 2025-09-06T01:23:18.662871Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 6 01:23:18.758610 waagent[1578]: 2025-09-06T01:23:18.758474Z INFO Daemon Daemon Found private key matching thumbprint AC16A0C42C38B033FFB6F838CA6C5611E40FC9FD Sep 6 01:23:18.767632 waagent[1578]: 2025-09-06T01:23:18.767531Z INFO Daemon Daemon Fetch goal state completed Sep 6 01:23:18.821096 waagent[1578]: 2025-09-06T01:23:18.821033Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 7bd6ebcd-b986-4b88-8827-554cecaa912c New eTag: 2860253854208231182] Sep 6 01:23:18.832277 waagent[1578]: 2025-09-06T01:23:18.832178Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 01:23:18.884237 waagent[1578]: 2025-09-06T01:23:18.884158Z INFO Daemon Daemon Starting provisioning Sep 6 01:23:18.889609 waagent[1578]: 2025-09-06T01:23:18.889507Z INFO Daemon Daemon Handle ovf-env.xml. Sep 6 01:23:18.894613 waagent[1578]: 2025-09-06T01:23:18.894459Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-9a681c3ae9] Sep 6 01:23:18.951112 waagent[1578]: 2025-09-06T01:23:18.950970Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-9a681c3ae9] Sep 6 01:23:18.958107 waagent[1578]: 2025-09-06T01:23:18.958004Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 6 01:23:18.965331 waagent[1578]: 2025-09-06T01:23:18.965236Z INFO Daemon Daemon Primary interface is [eth0] Sep 6 01:23:18.983383 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 6 01:23:18.983568 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 6 01:23:18.983629 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 6 01:23:18.983942 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:23:18.989807 systemd-networkd[1250]: eth0: DHCPv6 lease lost Sep 6 01:23:18.991803 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:23:18.992005 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:23:18.994312 systemd[1]: Starting systemd-networkd.service... 
Sep 6 01:23:19.025724 systemd-networkd[1626]: enP24110s1: Link UP Sep 6 01:23:19.025811 systemd-networkd[1626]: enP24110s1: Gained carrier Sep 6 01:23:19.026858 systemd-networkd[1626]: eth0: Link UP Sep 6 01:23:19.026870 systemd-networkd[1626]: eth0: Gained carrier Sep 6 01:23:19.027222 systemd-networkd[1626]: lo: Link UP Sep 6 01:23:19.027232 systemd-networkd[1626]: lo: Gained carrier Sep 6 01:23:19.027486 systemd-networkd[1626]: eth0: Gained IPv6LL Sep 6 01:23:19.028048 systemd-networkd[1626]: Enumeration completed Sep 6 01:23:19.028170 systemd[1]: Started systemd-networkd.service. Sep 6 01:23:19.034801 waagent[1578]: 2025-09-06T01:23:19.031384Z INFO Daemon Daemon Create user account if not exists Sep 6 01:23:19.030109 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:23:19.038517 waagent[1578]: 2025-09-06T01:23:19.038404Z INFO Daemon Daemon User core already exists, skip useradd Sep 6 01:23:19.040577 systemd-networkd[1626]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:23:19.044821 waagent[1578]: 2025-09-06T01:23:19.044666Z INFO Daemon Daemon Configure sudoer Sep 6 01:23:19.050116 waagent[1578]: 2025-09-06T01:23:19.050021Z INFO Daemon Daemon Configure sshd Sep 6 01:23:19.054606 waagent[1578]: 2025-09-06T01:23:19.054516Z INFO Daemon Daemon Deploy ssh public key. Sep 6 01:23:19.072873 systemd-networkd[1626]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:23:19.077969 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:23:19.407875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 01:23:19.408042 systemd[1]: Stopped kubelet.service. Sep 6 01:23:19.409515 systemd[1]: Starting kubelet.service... Sep 6 01:23:19.507290 systemd[1]: Started kubelet.service. Sep 6 01:23:19.663022 kubelet[1636]: E0906 01:23:19.662903 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:23:19.665844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:23:19.665968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:23:20.309783 waagent[1578]: 2025-09-06T01:23:20.309491Z INFO Daemon Daemon Provisioning complete Sep 6 01:23:20.327861 waagent[1578]: 2025-09-06T01:23:20.327788Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 6 01:23:20.334911 waagent[1578]: 2025-09-06T01:23:20.334817Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 6 01:23:20.346002 waagent[1578]: 2025-09-06T01:23:20.345914Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 6 01:23:20.665267 waagent[1641]: 2025-09-06T01:23:20.665102Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 6 01:23:20.666461 waagent[1641]: 2025-09-06T01:23:20.666381Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:23:20.666765 waagent[1641]: 2025-09-06T01:23:20.666686Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:23:20.680171 waagent[1641]: 2025-09-06T01:23:20.680075Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 6 01:23:20.680533 waagent[1641]: 2025-09-06T01:23:20.680481Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 6 01:23:20.744023 waagent[1641]: 2025-09-06T01:23:20.743881Z INFO ExtHandler ExtHandler Found private key matching thumbprint AC16A0C42C38B033FFB6F838CA6C5611E40FC9FD Sep 6 01:23:20.744516 waagent[1641]: 2025-09-06T01:23:20.744458Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 6 01:23:20.759453 waagent[1641]: 2025-09-06T01:23:20.759389Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 5c6e96ba-1036-4b11-a6cc-5ae974738310 New eTag: 2860253854208231182] Sep 6 01:23:20.760286 waagent[1641]: 2025-09-06T01:23:20.760218Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 6 01:23:20.817436 waagent[1641]: 2025-09-06T01:23:20.817279Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 01:23:20.843011 waagent[1641]: 2025-09-06T01:23:20.842895Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1641 Sep 6 01:23:20.847504 waagent[1641]: 2025-09-06T01:23:20.847417Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:23:20.849197 waagent[1641]: 2025-09-06T01:23:20.849127Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 6 01:23:20.982221 waagent[1641]: 2025-09-06T01:23:20.982099Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:23:20.982838 waagent[1641]: 2025-09-06T01:23:20.982775Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:23:20.991994 waagent[1641]: 2025-09-06T01:23:20.991931Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 6 01:23:20.992794 waagent[1641]: 2025-09-06T01:23:20.992698Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:23:20.994206 waagent[1641]: 2025-09-06T01:23:20.994130Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 6 01:23:20.996142 waagent[1641]: 2025-09-06T01:23:20.996065Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:23:20.996387 waagent[1641]: 2025-09-06T01:23:20.996306Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:23:20.997238 waagent[1641]: 2025-09-06T01:23:20.997169Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:23:20.997932 waagent[1641]: 2025-09-06T01:23:20.997860Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 6 01:23:20.998300 waagent[1641]: 2025-09-06T01:23:20.998232Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 01:23:20.998300 waagent[1641]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 01:23:20.998300 waagent[1641]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 01:23:20.998300 waagent[1641]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 01:23:20.998300 waagent[1641]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:23:20.998300 waagent[1641]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:23:20.998300 waagent[1641]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:23:21.000937 waagent[1641]: 2025-09-06T01:23:21.000697Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 6 01:23:21.001371 waagent[1641]: 2025-09-06T01:23:21.001285Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:23:21.002231 waagent[1641]: 2025-09-06T01:23:21.002139Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:23:21.003405 waagent[1641]: 2025-09-06T01:23:21.003314Z INFO EnvHandler ExtHandler Configure routes Sep 6 01:23:21.003597 waagent[1641]: 2025-09-06T01:23:21.003539Z INFO EnvHandler ExtHandler Gateway:None Sep 6 01:23:21.003725 waagent[1641]: 2025-09-06T01:23:21.003678Z INFO EnvHandler ExtHandler Routes:None Sep 6 01:23:21.005197 waagent[1641]: 2025-09-06T01:23:21.005146Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 01:23:21.005643 waagent[1641]: 2025-09-06T01:23:21.005033Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 01:23:21.005997 waagent[1641]: 2025-09-06T01:23:21.005889Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 01:23:21.006184 waagent[1641]: 2025-09-06T01:23:21.006106Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 6 01:23:21.006995 waagent[1641]: 2025-09-06T01:23:21.006915Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 01:23:21.019709 waagent[1641]: 2025-09-06T01:23:21.019624Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 6 01:23:21.021264 waagent[1641]: 2025-09-06T01:23:21.021190Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 01:23:21.023353 waagent[1641]: 2025-09-06T01:23:21.023278Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Sep 6 01:23:21.047177 waagent[1641]: 2025-09-06T01:23:21.047022Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1626' Sep 6 01:23:21.075357 waagent[1641]: 2025-09-06T01:23:21.075284Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
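Note: the routing-table dump above prints destination and gateway columns as little-endian hex, exactly as /proc/net/route stores them on this little-endian machine. A small sketch of how to decode those columns; on these lines 0114C80A is 10.200.20.1 (the DHCP gateway), 10813FA8 is 168.63.129.16 (the WireServer) and FEA9FEA9 is 169.254.169.254 (the instance metadata endpoint).

```python
# Decode the little-endian hex addresses that /proc/net/route (and this log) print.
import socket
import struct

def decode(hex_addr):
    """Turn e.g. '0114C80A' into the dotted quad '10.200.20.1'."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

for column in ("00000000", "0014C80A", "0114C80A", "10813FA8", "FEA9FEA9"):
    print(column, "->", decode(column))
# 0114C80A -> 10.200.20.1, 10813FA8 -> 168.63.129.16, FEA9FEA9 -> 169.254.169.254
```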
Sep 6 01:23:21.147046 waagent[1641]: 2025-09-06T01:23:21.146881Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 01:23:21.147046 waagent[1641]: Executing ['ip', '-a', '-o', 'link']: Sep 6 01:23:21.147046 waagent[1641]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 01:23:21.147046 waagent[1641]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:75:71 brd ff:ff:ff:ff:ff:ff Sep 6 01:23:21.147046 waagent[1641]: 3: enP24110s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:75:71 brd ff:ff:ff:ff:ff:ff\ altname enP24110p0s2 Sep 6 01:23:21.147046 waagent[1641]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 01:23:21.147046 waagent[1641]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 01:23:21.147046 waagent[1641]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 01:23:21.147046 waagent[1641]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 01:23:21.147046 waagent[1641]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 6 01:23:21.147046 waagent[1641]: 2: eth0 inet6 fe80::222:48ff:febd:7571/64 scope link \ valid_lft forever preferred_lft forever Sep 6 01:23:21.466028 waagent[1641]: 2025-09-06T01:23:21.465875Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Sep 6 01:23:21.474804 waagent[1641]: 2025-09-06T01:23:21.474692Z INFO EnvHandler ExtHandler Firewall rules: Sep 6 01:23:21.474804 waagent[1641]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:21.474804 waagent[1641]: pkts bytes target prot opt in out source destination Sep 6 01:23:21.474804 waagent[1641]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:21.474804 waagent[1641]: pkts bytes target prot opt in out source destination Sep 6 01:23:21.474804 waagent[1641]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:21.474804 waagent[1641]: pkts bytes target prot opt in out source destination Sep 6 01:23:21.474804 waagent[1641]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:23:21.474804 waagent[1641]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:23:21.478080 waagent[1641]: 2025-09-06T01:23:21.477993Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 6 01:23:22.351103 waagent[1578]: 2025-09-06T01:23:22.350953Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 6 01:23:22.357112 waagent[1578]: 2025-09-06T01:23:22.357034Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 6 01:23:23.689770 waagent[1680]: 2025-09-06T01:23:23.689630Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 6 01:23:23.690968 waagent[1680]: 2025-09-06T01:23:23.690895Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 6 01:23:23.691251 waagent[1680]: 2025-09-06T01:23:23.691200Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 6 01:23:23.691494 waagent[1680]: 2025-09-06T01:23:23.691445Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 6 01:23:23.707678 waagent[1680]: 2025-09-06T01:23:23.707523Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; 
AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 6 01:23:23.708415 waagent[1680]: 2025-09-06T01:23:23.708352Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:23:23.708709 waagent[1680]: 2025-09-06T01:23:23.708658Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:23:23.709122 waagent[1680]: 2025-09-06T01:23:23.709064Z INFO ExtHandler ExtHandler Initializing the goal state... Sep 6 01:23:23.724497 waagent[1680]: 2025-09-06T01:23:23.724382Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 6 01:23:23.739465 waagent[1680]: 2025-09-06T01:23:23.739387Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 6 01:23:23.740940 waagent[1680]: 2025-09-06T01:23:23.740871Z INFO ExtHandler Sep 6 01:23:23.741273 waagent[1680]: 2025-09-06T01:23:23.741221Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 790ac7ed-ad27-4a19-9272-d1afb0ab441c eTag: 2860253854208231182 source: Fabric] Sep 6 01:23:23.742268 waagent[1680]: 2025-09-06T01:23:23.742207Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 6 01:23:23.743772 waagent[1680]: 2025-09-06T01:23:23.743683Z INFO ExtHandler Sep 6 01:23:23.744032 waagent[1680]: 2025-09-06T01:23:23.743980Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 6 01:23:23.751972 waagent[1680]: 2025-09-06T01:23:23.751904Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 6 01:23:23.752820 waagent[1680]: 2025-09-06T01:23:23.752763Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 6 01:23:23.773794 waagent[1680]: 2025-09-06T01:23:23.773699Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 6 01:23:23.846484 waagent[1680]: 2025-09-06T01:23:23.846342Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC16A0C42C38B033FFB6F838CA6C5611E40FC9FD', 'hasPrivateKey': True} Sep 6 01:23:23.848318 waagent[1680]: 2025-09-06T01:23:23.848241Z INFO ExtHandler Fetch goal state from WireServer completed Sep 6 01:23:23.849506 waagent[1680]: 2025-09-06T01:23:23.849430Z INFO ExtHandler ExtHandler Goal state initialization completed. Sep 6 01:23:23.869253 waagent[1680]: 2025-09-06T01:23:23.869128Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 6 01:23:23.879132 waagent[1680]: 2025-09-06T01:23:23.879002Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:23:23.883944 waagent[1680]: 2025-09-06T01:23:23.883821Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 6 01:23:23.884381 waagent[1680]: 2025-09-06T01:23:23.884325Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 6 01:23:23.933090 waagent[1680]: 2025-09-06T01:23:23.932945Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Sep 6 01:23:23.933090 waagent[1680]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:23.933090 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:23.933090 waagent[1680]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:23.933090 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:23.933090 waagent[1680]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:23.933090 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:23.933090 waagent[1680]: 54 7805 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:23:23.933090 waagent[1680]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:23:23.937185 waagent[1680]: 2025-09-06T01:23:23.937108Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 6 01:23:23.941084 waagent[1680]: 2025-09-06T01:23:23.940887Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 6 01:23:23.941564 waagent[1680]: 2025-09-06T01:23:23.941507Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:23:23.942174 waagent[1680]: 2025-09-06T01:23:23.942112Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:23:23.951463 waagent[1680]: 2025-09-06T01:23:23.951384Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 6 01:23:23.952152 waagent[1680]: 2025-09-06T01:23:23.952086Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:23:23.961219 waagent[1680]: 2025-09-06T01:23:23.961130Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1680 Sep 6 01:23:23.964940 waagent[1680]: 2025-09-06T01:23:23.964844Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:23:23.965968 waagent[1680]: 2025-09-06T01:23:23.965901Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 6 01:23:23.967000 waagent[1680]: 2025-09-06T01:23:23.966935Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 6 01:23:23.970014 waagent[1680]: 2025-09-06T01:23:23.969941Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 6 01:23:23.970396 waagent[1680]: 2025-09-06T01:23:23.970342Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 6 01:23:23.972034 waagent[1680]: 2025-09-06T01:23:23.971949Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:23:23.972624 waagent[1680]: 2025-09-06T01:23:23.972552Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:23:23.973035 waagent[1680]: 2025-09-06T01:23:23.972980Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:23:23.973817 waagent[1680]: 2025-09-06T01:23:23.973714Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 6 01:23:23.974335 waagent[1680]: 2025-09-06T01:23:23.974273Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 01:23:23.974335 waagent[1680]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 01:23:23.974335 waagent[1680]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 01:23:23.974335 waagent[1680]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 01:23:23.974335 waagent[1680]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:23:23.974335 waagent[1680]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:23:23.974335 waagent[1680]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:23:23.977320 waagent[1680]: 2025-09-06T01:23:23.977184Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 6 01:23:23.978271 waagent[1680]: 2025-09-06T01:23:23.978199Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:23:23.981164 waagent[1680]: 2025-09-06T01:23:23.981003Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:23:23.982275 waagent[1680]: 2025-09-06T01:23:23.982199Z INFO EnvHandler ExtHandler Configure routes Sep 6 01:23:23.982650 waagent[1680]: 2025-09-06T01:23:23.982592Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 01:23:23.982831 waagent[1680]: 2025-09-06T01:23:23.982461Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 01:23:23.984266 waagent[1680]: 2025-09-06T01:23:23.984191Z INFO EnvHandler ExtHandler Gateway:None Sep 6 01:23:23.984576 waagent[1680]: 2025-09-06T01:23:23.984519Z INFO EnvHandler ExtHandler Routes:None Sep 6 01:23:23.985240 waagent[1680]: 2025-09-06T01:23:23.985156Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 01:23:23.985657 waagent[1680]: 2025-09-06T01:23:23.985591Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Sep 6 01:23:23.994212 waagent[1680]: 2025-09-06T01:23:23.993877Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 01:23:24.003508 waagent[1680]: 2025-09-06T01:23:24.003419Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 01:23:24.003508 waagent[1680]: Executing ['ip', '-a', '-o', 'link']: Sep 6 01:23:24.003508 waagent[1680]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 01:23:24.003508 waagent[1680]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:75:71 brd ff:ff:ff:ff:ff:ff Sep 6 01:23:24.003508 waagent[1680]: 3: enP24110s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bd:75:71 brd ff:ff:ff:ff:ff:ff\ altname enP24110p0s2 Sep 6 01:23:24.003508 waagent[1680]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 01:23:24.003508 waagent[1680]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 01:23:24.003508 waagent[1680]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 01:23:24.003508 waagent[1680]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 01:23:24.003508 waagent[1680]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 6 01:23:24.003508 waagent[1680]: 2: eth0 inet6 fe80::222:48ff:febd:7571/64 scope link \ valid_lft forever preferred_lft forever Sep 6 01:23:24.013408 waagent[1680]: 2025-09-06T01:23:24.013300Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 6 01:23:24.031422 waagent[1680]: 2025-09-06T01:23:24.031320Z INFO ExtHandler ExtHandler Sep 6 01:23:24.032729 waagent[1680]: 2025-09-06T01:23:24.032646Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6d16695c-98c8-4475-9e59-3f1c15d3b07f correlation e86ae225-080a-4160-b11e-93561b6c78c3 created: 2025-09-06T01:21:45.807899Z] Sep 6 01:23:24.037035 waagent[1680]: 2025-09-06T01:23:24.036947Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 6 01:23:24.042306 waagent[1680]: 2025-09-06T01:23:24.042218Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Sep 6 01:23:24.059972 waagent[1680]: 2025-09-06T01:23:24.059869Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:23:24.081565 waagent[1680]: 2025-09-06T01:23:24.081388Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 6 01:23:24.089474 waagent[1680]: 2025-09-06T01:23:24.089389Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 487F0978-6A7B-43F2-8F6E-D55CE4E317CC;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 6 01:23:24.090463 waagent[1680]: 2025-09-06T01:23:24.090394Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Sep 6 01:23:24.090463 waagent[1680]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:24.090463 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:24.090463 waagent[1680]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:24.090463 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:24.090463 waagent[1680]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:24.090463 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:24.090463 waagent[1680]: 100 16174 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:23:24.090463 waagent[1680]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:23:24.158086 waagent[1680]: 2025-09-06T01:23:24.157940Z INFO EnvHandler ExtHandler The firewall was setup successfully: Sep 6 01:23:24.158086 waagent[1680]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:24.158086 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:24.158086 waagent[1680]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:24.158086 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:24.158086 waagent[1680]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:23:24.158086 waagent[1680]: pkts bytes target prot opt in out source destination Sep 6 01:23:24.158086 waagent[1680]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 6 01:23:24.158086 waagent[1680]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:23:24.158086 waagent[1680]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:23:24.160154 waagent[1680]: 2025-09-06T01:23:24.160091Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 6 01:23:29.835317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:23:29.835495 systemd[1]: Stopped kubelet.service. Sep 6 01:23:29.836920 systemd[1]: Starting kubelet.service... Sep 6 01:23:29.930933 systemd[1]: Started kubelet.service. Sep 6 01:23:30.044871 kubelet[1730]: E0906 01:23:30.044804 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:23:30.047400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:23:30.047529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:23:32.518905 systemd[1]: Created slice system-sshd.slice. Sep 6 01:23:32.520566 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:39224.service. Sep 6 01:23:33.202243 sshd[1736]: Accepted publickey for core from 10.200.16.10 port 39224 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:33.220153 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:33.224924 systemd[1]: Started session-3.scope. Sep 6 01:23:33.225816 systemd-logind[1467]: New session 3 of user core. Sep 6 01:23:33.589068 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:39236.service. 
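Note: after the environment thread resets the firewall, the final OUTPUT chain above holds three rules for the WireServer address: allow TCP/53, allow any TCP from UID 0 (the agent itself), and drop other new connections. The listing does not name the table these rules live in, so the sketch below targets the default (filter) table and is illustrative only, wrapped in subprocess calls rather than the agent's actual code.

```python
# Roughly equivalent iptables invocations for the ruleset listed above.
# Assumption: default (filter) table; the log's listing does not name one.
import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w"] + rule, check=True)
```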
Sep 6 01:23:34.016504 sshd[1741]: Accepted publickey for core from 10.200.16.10 port 39236 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:34.019121 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:34.023616 systemd[1]: Started session-4.scope. Sep 6 01:23:34.024161 systemd-logind[1467]: New session 4 of user core. Sep 6 01:23:34.353580 sshd[1741]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:34.356663 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:39236.service: Deactivated successfully. Sep 6 01:23:34.357412 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 01:23:34.358003 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Sep 6 01:23:34.359076 systemd-logind[1467]: Removed session 4. Sep 6 01:23:34.424076 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:39242.service. Sep 6 01:23:34.853075 sshd[1747]: Accepted publickey for core from 10.200.16.10 port 39242 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:34.854417 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:34.858520 systemd-logind[1467]: New session 5 of user core. Sep 6 01:23:34.859001 systemd[1]: Started session-5.scope. Sep 6 01:23:35.182214 sshd[1747]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:35.185502 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Sep 6 01:23:35.185970 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:39242.service: Deactivated successfully. Sep 6 01:23:35.186621 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 01:23:35.187302 systemd-logind[1467]: Removed session 5. Sep 6 01:23:35.252900 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:39258.service. Sep 6 01:23:35.683491 sshd[1753]: Accepted publickey for core from 10.200.16.10 port 39258 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:35.684886 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:35.688915 systemd-logind[1467]: New session 6 of user core. Sep 6 01:23:35.689366 systemd[1]: Started session-6.scope. Sep 6 01:23:36.016159 sshd[1753]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:36.019381 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Sep 6 01:23:36.020133 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:39258.service: Deactivated successfully. Sep 6 01:23:36.020827 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:23:36.021496 systemd-logind[1467]: Removed session 6. Sep 6 01:23:36.093278 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:39262.service. Sep 6 01:23:36.562379 sshd[1759]: Accepted publickey for core from 10.200.16.10 port 39262 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:23:36.564152 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:23:36.567796 systemd-logind[1467]: New session 7 of user core. Sep 6 01:23:36.568635 systemd[1]: Started session-7.scope. Sep 6 01:23:37.161947 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:23:37.162185 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:23:37.191716 systemd[1]: Starting coreos-metadata.service... 
Sep 6 01:23:37.270492 coreos-metadata[1766]: Sep 06 01:23:37.270 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 6 01:23:37.273828 coreos-metadata[1766]: Sep 06 01:23:37.273 INFO Fetch successful Sep 6 01:23:37.274011 coreos-metadata[1766]: Sep 06 01:23:37.273 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 6 01:23:37.276029 coreos-metadata[1766]: Sep 06 01:23:37.275 INFO Fetch successful Sep 6 01:23:37.276333 coreos-metadata[1766]: Sep 06 01:23:37.276 INFO Fetching http://168.63.129.16/machine/0ce3556a-17d1-44cc-b833-d6ee6df60778/25f41cab%2D7992%2D4cf8%2D8b0b%2D041b8f203229.%5Fci%2D3510.3.8%2Dn%2D9a681c3ae9?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 6 01:23:37.278556 coreos-metadata[1766]: Sep 06 01:23:37.278 INFO Fetch successful Sep 6 01:23:37.315876 coreos-metadata[1766]: Sep 06 01:23:37.315 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 6 01:23:37.326835 coreos-metadata[1766]: Sep 06 01:23:37.326 INFO Fetch successful Sep 6 01:23:37.337968 systemd[1]: Finished coreos-metadata.service. Sep 6 01:23:37.816386 systemd[1]: Stopped kubelet.service. Sep 6 01:23:37.818572 systemd[1]: Starting kubelet.service... Sep 6 01:23:37.868484 systemd[1]: Reloading. Sep 6 01:23:37.966142 /usr/lib/systemd/system-generators/torcx-generator[1819]: time="2025-09-06T01:23:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:23:37.968848 /usr/lib/systemd/system-generators/torcx-generator[1819]: time="2025-09-06T01:23:37Z" level=info msg="torcx already run" Sep 6 01:23:38.035262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:23:38.035284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:23:38.051811 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:23:38.150869 systemd[1]: Started kubelet.service. Sep 6 01:23:38.155961 systemd[1]: Stopping kubelet.service... Sep 6 01:23:38.157045 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:23:38.157244 systemd[1]: Stopped kubelet.service. Sep 6 01:23:38.159126 systemd[1]: Starting kubelet.service... Sep 6 01:23:38.337542 systemd[1]: Started kubelet.service. Sep 6 01:23:38.555543 kubelet[1885]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:23:38.555969 kubelet[1885]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 01:23:38.556029 kubelet[1885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
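Note: coreos-metadata above fetches both the WireServer (168.63.129.16) and the instance metadata service (169.254.169.254) endpoints that appear verbatim in the log. A minimal standard-library sketch of those two fetches; the Metadata header is the usual requirement for the instance metadata service, and the retry/backoff behaviour of coreos-metadata is not reproduced here.

```python
# Re-issue the two metadata requests logged above (illustrative sketch).
import urllib.request

WIRESERVER_VERSIONS = "http://168.63.129.16/?comp=versions"
IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
               "?api-version=2017-08-01&format=text")

def fetch(url, headers=None):
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

print(fetch(WIRESERVER_VERSIONS))                # XML list of supported wire-protocol versions
print(fetch(IMDS_VMSIZE, {"Metadata": "true"}))  # VM size as plain text
```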
Sep 6 01:23:38.556204 kubelet[1885]: I0906 01:23:38.556173 1885 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:23:39.158943 kubelet[1885]: I0906 01:23:39.158901 1885 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 01:23:39.159181 kubelet[1885]: I0906 01:23:39.159169 1885 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:23:39.159528 kubelet[1885]: I0906 01:23:39.159510 1885 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 01:23:39.184828 kubelet[1885]: I0906 01:23:39.184784 1885 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:23:39.195882 kubelet[1885]: E0906 01:23:39.195819 1885 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:23:39.195882 kubelet[1885]: I0906 01:23:39.195877 1885 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:23:39.199208 kubelet[1885]: I0906 01:23:39.199181 1885 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 01:23:39.201123 kubelet[1885]: I0906 01:23:39.201060 1885 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:23:39.201439 kubelet[1885]: I0906 01:23:39.201253 1885 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 01:23:39.201590 kubelet[1885]: I0906 01:23:39.201576 1885 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:23:39.201675 kubelet[1885]: I0906 01:23:39.201664 1885 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 
01:23:39.201907 kubelet[1885]: I0906 01:23:39.201892 1885 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:23:39.205548 kubelet[1885]: I0906 01:23:39.205521 1885 kubelet.go:446] "Attempting to sync node with API server" Sep 6 01:23:39.206102 kubelet[1885]: I0906 01:23:39.206084 1885 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:23:39.206224 kubelet[1885]: I0906 01:23:39.206214 1885 kubelet.go:352] "Adding apiserver pod source" Sep 6 01:23:39.206432 kubelet[1885]: I0906 01:23:39.206421 1885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:23:39.206580 kubelet[1885]: E0906 01:23:39.206370 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:39.206650 kubelet[1885]: E0906 01:23:39.206325 1885 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:39.210512 kubelet[1885]: I0906 01:23:39.210480 1885 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:23:39.211020 kubelet[1885]: I0906 01:23:39.210998 1885 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:23:39.211075 kubelet[1885]: W0906 01:23:39.211066 1885 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 01:23:39.211642 kubelet[1885]: I0906 01:23:39.211617 1885 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 01:23:39.211670 kubelet[1885]: I0906 01:23:39.211657 1885 server.go:1287] "Started kubelet" Sep 6 01:23:39.212334 kubelet[1885]: I0906 01:23:39.212284 1885 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:23:39.213747 kubelet[1885]: I0906 01:23:39.213712 1885 server.go:479] "Adding debug handlers to kubelet server" Sep 6 01:23:39.223010 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
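Note: the nodeConfig blob logged a few lines up by container_manager_linux.go is plain JSON, so the hard-eviction thresholds it carries (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%) can be pulled out mechanically. A sketch, assuming the blob has first been copied out of the log into a string.

```python
# Extract the hard-eviction thresholds from the logged nodeConfig JSON.
# `node_config_json` is assumed to hold the {"NodeName":"10.200.20.4",...} blob
# copied from the "Creating Container Manager object" line above.
import json

def eviction_thresholds(node_config_json):
    cfg = json.loads(node_config_json)
    for t in cfg["HardEvictionThresholds"]:
        quantity = t["Value"]["Quantity"]
        limit = quantity if quantity is not None else f'{t["Value"]["Percentage"]:.0%}'
        yield f'{t["Signal"]} {t["Operator"]} {limit}'

# yields e.g. "memory.available LessThan 100Mi", "nodefs.available LessThan 10%"
```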
Sep 6 01:23:39.223235 kubelet[1885]: I0906 01:23:39.223204 1885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:23:39.223833 kubelet[1885]: I0906 01:23:39.223762 1885 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:23:39.224141 kubelet[1885]: I0906 01:23:39.224122 1885 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:23:39.228178 kubelet[1885]: E0906 01:23:39.228059 1885 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.4.18628cfc2532a5f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.4,UID:10.200.20.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.4,},FirstTimestamp:2025-09-06 01:23:39.211638261 +0000 UTC m=+0.868969004,LastTimestamp:2025-09-06 01:23:39.211638261 +0000 UTC m=+0.868969004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.4,}" Sep 6 01:23:39.229236 kubelet[1885]: I0906 01:23:39.229209 1885 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:23:39.233106 kubelet[1885]: E0906 01:23:39.233080 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.233277 kubelet[1885]: I0906 01:23:39.233264 1885 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 01:23:39.233570 kubelet[1885]: I0906 01:23:39.233549 1885 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 01:23:39.233708 kubelet[1885]: I0906 01:23:39.233697 1885 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:23:39.234256 kubelet[1885]: E0906 01:23:39.234229 1885 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:23:39.236081 kubelet[1885]: I0906 01:23:39.236050 1885 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:23:39.236351 kubelet[1885]: I0906 01:23:39.236327 1885 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:23:39.244253 kubelet[1885]: E0906 01:23:39.244205 1885 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Sep 6 01:23:39.244551 kubelet[1885]: W0906 01:23:39.244526 1885 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 6 01:23:39.244657 kubelet[1885]: E0906 01:23:39.244637 1885 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 6 01:23:39.244808 kubelet[1885]: W0906 01:23:39.244789 1885 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.20.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 6 01:23:39.244919 kubelet[1885]: E0906 01:23:39.244905 1885 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.200.20.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 6 01:23:39.245190 kubelet[1885]: W0906 01:23:39.245157 1885 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Sep 6 01:23:39.245295 kubelet[1885]: E0906 01:23:39.245278 1885 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Sep 6 01:23:39.245468 kubelet[1885]: E0906 01:23:39.245389 1885 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.4.18628cfc268b317c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.4,UID:10.200.20.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.200.20.4,},FirstTimestamp:2025-09-06 01:23:39.234218364 +0000 UTC m=+0.891549107,LastTimestamp:2025-09-06 01:23:39.234218364 +0000 UTC 
m=+0.891549107,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.4,}" Sep 6 01:23:39.245859 kubelet[1885]: I0906 01:23:39.245837 1885 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:23:39.271265 kubelet[1885]: I0906 01:23:39.271235 1885 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 01:23:39.271564 kubelet[1885]: I0906 01:23:39.271551 1885 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 01:23:39.271653 kubelet[1885]: I0906 01:23:39.271642 1885 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:23:39.277613 kubelet[1885]: I0906 01:23:39.277584 1885 policy_none.go:49] "None policy: Start" Sep 6 01:23:39.277829 kubelet[1885]: I0906 01:23:39.277815 1885 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 01:23:39.277908 kubelet[1885]: I0906 01:23:39.277897 1885 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:23:39.287050 systemd[1]: Created slice kubepods.slice. Sep 6 01:23:39.292565 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 01:23:39.295964 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 01:23:39.303818 kubelet[1885]: I0906 01:23:39.303783 1885 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:23:39.304123 kubelet[1885]: I0906 01:23:39.304108 1885 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:23:39.304245 kubelet[1885]: I0906 01:23:39.304207 1885 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:23:39.306249 kubelet[1885]: I0906 01:23:39.306223 1885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:23:39.309352 kubelet[1885]: E0906 01:23:39.308961 1885 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 01:23:39.309352 kubelet[1885]: E0906 01:23:39.309008 1885 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.4\" not found" Sep 6 01:23:39.310836 kubelet[1885]: I0906 01:23:39.310797 1885 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:23:39.312352 kubelet[1885]: I0906 01:23:39.312323 1885 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 01:23:39.312501 kubelet[1885]: I0906 01:23:39.312487 1885 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 01:23:39.312715 kubelet[1885]: I0906 01:23:39.312701 1885 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 6 01:23:39.313043 kubelet[1885]: I0906 01:23:39.313028 1885 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 01:23:39.313180 kubelet[1885]: E0906 01:23:39.313163 1885 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 01:23:39.405675 kubelet[1885]: I0906 01:23:39.405644 1885 kubelet_node_status.go:75] "Attempting to register node" node="10.200.20.4" Sep 6 01:23:39.412911 kubelet[1885]: I0906 01:23:39.412782 1885 kubelet_node_status.go:78] "Successfully registered node" node="10.200.20.4" Sep 6 01:23:39.412911 kubelet[1885]: E0906 01:23:39.412823 1885 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.200.20.4\": node \"10.200.20.4\" not found" Sep 6 01:23:39.438171 kubelet[1885]: E0906 01:23:39.438128 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.538651 kubelet[1885]: E0906 01:23:39.538616 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.639412 kubelet[1885]: E0906 01:23:39.639370 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.740409 kubelet[1885]: E0906 01:23:39.740290 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.840982 kubelet[1885]: E0906 01:23:39.840931 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.868450 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 6 01:23:39.941564 kubelet[1885]: E0906 01:23:39.941519 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:39.969990 sshd[1759]: pam_unix(sshd:session): session closed for user core Sep 6 01:23:39.972589 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 01:23:39.973411 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:39262.service: Deactivated successfully. Sep 6 01:23:39.973668 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:23:39.974688 systemd-logind[1467]: Removed session 7. 
Sep 6 01:23:40.042178 kubelet[1885]: E0906 01:23:40.042051 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:40.142684 kubelet[1885]: E0906 01:23:40.142646 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:40.161855 kubelet[1885]: I0906 01:23:40.161815 1885 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 6 01:23:40.162139 kubelet[1885]: W0906 01:23:40.162035 1885 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 01:23:40.207483 kubelet[1885]: E0906 01:23:40.207434 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:40.243522 kubelet[1885]: E0906 01:23:40.243480 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:40.343705 kubelet[1885]: E0906 01:23:40.343672 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:40.444504 kubelet[1885]: E0906 01:23:40.444461 1885 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.4\" not found" Sep 6 01:23:40.545711 kubelet[1885]: I0906 01:23:40.545679 1885 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 6 01:23:40.546370 env[1476]: time="2025-09-06T01:23:40.546260849Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:23:40.546678 kubelet[1885]: I0906 01:23:40.546498 1885 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 6 01:23:41.208345 kubelet[1885]: E0906 01:23:41.208300 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:41.208345 kubelet[1885]: I0906 01:23:41.208311 1885 apiserver.go:52] "Watching apiserver" Sep 6 01:23:41.217508 systemd[1]: Created slice kubepods-besteffort-pod770aaaa3_45ae_4115_b2d2_1549f4a94cb6.slice. Sep 6 01:23:41.238429 systemd[1]: Created slice kubepods-burstable-pod38fbebce_5f6f_4b59_8987_206f45f67155.slice. 
Sep 6 01:23:41.252877 kubelet[1885]: I0906 01:23:41.252840 1885 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 01:23:41.253091 kubelet[1885]: I0906 01:23:41.253048 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-run\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.354194 kubelet[1885]: I0906 01:23:41.354150 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/770aaaa3-45ae-4115-b2d2-1549f4a94cb6-kube-proxy\") pod \"kube-proxy-mcwww\" (UID: \"770aaaa3-45ae-4115-b2d2-1549f4a94cb6\") " pod="kube-system/kube-proxy-mcwww" Sep 6 01:23:41.354430 kubelet[1885]: I0906 01:23:41.354414 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cni-path\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.354511 kubelet[1885]: I0906 01:23:41.354498 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38fbebce-5f6f-4b59-8987-206f45f67155-clustermesh-secrets\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.354841 kubelet[1885]: I0906 01:23:41.354818 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-config-path\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.354945 kubelet[1885]: I0906 01:23:41.354931 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-net\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355052 kubelet[1885]: I0906 01:23:41.355039 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-hubble-tls\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355137 kubelet[1885]: I0906 01:23:41.355122 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg25t\" (UniqueName: \"kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-kube-api-access-lg25t\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355213 kubelet[1885]: I0906 01:23:41.355201 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/770aaaa3-45ae-4115-b2d2-1549f4a94cb6-lib-modules\") pod \"kube-proxy-mcwww\" (UID: \"770aaaa3-45ae-4115-b2d2-1549f4a94cb6\") " pod="kube-system/kube-proxy-mcwww" Sep 6 01:23:41.355299 
kubelet[1885]: I0906 01:23:41.355286 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktkk\" (UniqueName: \"kubernetes.io/projected/770aaaa3-45ae-4115-b2d2-1549f4a94cb6-kube-api-access-7ktkk\") pod \"kube-proxy-mcwww\" (UID: \"770aaaa3-45ae-4115-b2d2-1549f4a94cb6\") " pod="kube-system/kube-proxy-mcwww" Sep 6 01:23:41.355421 kubelet[1885]: I0906 01:23:41.355406 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-bpf-maps\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355507 kubelet[1885]: I0906 01:23:41.355494 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-lib-modules\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355584 kubelet[1885]: I0906 01:23:41.355572 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-kernel\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355656 kubelet[1885]: I0906 01:23:41.355645 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-xtables-lock\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355729 kubelet[1885]: I0906 01:23:41.355717 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/770aaaa3-45ae-4115-b2d2-1549f4a94cb6-xtables-lock\") pod \"kube-proxy-mcwww\" (UID: \"770aaaa3-45ae-4115-b2d2-1549f4a94cb6\") " pod="kube-system/kube-proxy-mcwww" Sep 6 01:23:41.355854 kubelet[1885]: I0906 01:23:41.355838 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-hostproc\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.355933 kubelet[1885]: I0906 01:23:41.355921 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-cgroup\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.356056 kubelet[1885]: I0906 01:23:41.355999 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-etc-cni-netd\") pod \"cilium-xpmvw\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " pod="kube-system/cilium-xpmvw" Sep 6 01:23:41.457009 kubelet[1885]: I0906 01:23:41.456966 1885 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 01:23:41.538857 env[1476]: time="2025-09-06T01:23:41.538480342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcwww,Uid:770aaaa3-45ae-4115-b2d2-1549f4a94cb6,Namespace:kube-system,Attempt:0,}" Sep 6 01:23:41.551773 env[1476]: time="2025-09-06T01:23:41.551499772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpmvw,Uid:38fbebce-5f6f-4b59-8987-206f45f67155,Namespace:kube-system,Attempt:0,}" Sep 6 01:23:42.209195 kubelet[1885]: E0906 01:23:42.209142 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:43.209518 kubelet[1885]: E0906 01:23:43.209482 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:44.210015 kubelet[1885]: E0906 01:23:44.209954 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:44.829796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758728183.mount: Deactivated successfully. Sep 6 01:23:44.862213 env[1476]: time="2025-09-06T01:23:44.862163218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.875565 env[1476]: time="2025-09-06T01:23:44.875504554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.882330 env[1476]: time="2025-09-06T01:23:44.882285641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.892779 env[1476]: time="2025-09-06T01:23:44.892698088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.897137 env[1476]: time="2025-09-06T01:23:44.897081841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.902509 env[1476]: time="2025-09-06T01:23:44.902463863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.906579 env[1476]: time="2025-09-06T01:23:44.906520099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.917424 env[1476]: time="2025-09-06T01:23:44.917370022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:44.973570 env[1476]: time="2025-09-06T01:23:44.968946506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:23:44.973570 env[1476]: time="2025-09-06T01:23:44.968988185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:23:44.973570 env[1476]: time="2025-09-06T01:23:44.968999185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:23:44.973570 env[1476]: time="2025-09-06T01:23:44.969144744Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6503aeb68ea1f10f04479a18a92a87c64ca3c3a9cbfe36ac3a1f096d4ef671b9 pid=1932 runtime=io.containerd.runc.v2 Sep 6 01:23:44.990388 systemd[1]: Started cri-containerd-6503aeb68ea1f10f04479a18a92a87c64ca3c3a9cbfe36ac3a1f096d4ef671b9.scope. Sep 6 01:23:44.998975 env[1476]: time="2025-09-06T01:23:44.992349734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:23:44.998975 env[1476]: time="2025-09-06T01:23:44.992413533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:23:44.998975 env[1476]: time="2025-09-06T01:23:44.992424453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:23:44.998975 env[1476]: time="2025-09-06T01:23:44.992592531Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb pid=1959 runtime=io.containerd.runc.v2 Sep 6 01:23:45.022262 env[1476]: time="2025-09-06T01:23:45.022207465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcwww,Uid:770aaaa3-45ae-4115-b2d2-1549f4a94cb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6503aeb68ea1f10f04479a18a92a87c64ca3c3a9cbfe36ac3a1f096d4ef671b9\"" Sep 6 01:23:45.025248 env[1476]: time="2025-09-06T01:23:45.025202475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 01:23:45.028497 systemd[1]: Started cri-containerd-42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb.scope. Sep 6 01:23:45.056501 env[1476]: time="2025-09-06T01:23:45.056446999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xpmvw,Uid:38fbebce-5f6f-4b59-8987-206f45f67155,Namespace:kube-system,Attempt:0,} returns sandbox id \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\"" Sep 6 01:23:45.211093 kubelet[1885]: E0906 01:23:45.211013 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:46.211934 kubelet[1885]: E0906 01:23:46.211888 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:46.233132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886375939.mount: Deactivated successfully. 
Sep 6 01:23:46.746561 env[1476]: time="2025-09-06T01:23:46.746510019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:46.754097 env[1476]: time="2025-09-06T01:23:46.754052827Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:46.759597 env[1476]: time="2025-09-06T01:23:46.759550295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:46.764088 env[1476]: time="2025-09-06T01:23:46.764040493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:46.764664 env[1476]: time="2025-09-06T01:23:46.764634447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 6 01:23:46.767540 env[1476]: time="2025-09-06T01:23:46.766917625Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 01:23:46.767912 env[1476]: time="2025-09-06T01:23:46.767879136Z" level=info msg="CreateContainer within sandbox \"6503aeb68ea1f10f04479a18a92a87c64ca3c3a9cbfe36ac3a1f096d4ef671b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 01:23:46.806079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031503654.mount: Deactivated successfully. Sep 6 01:23:46.835495 env[1476]: time="2025-09-06T01:23:46.835430456Z" level=info msg="CreateContainer within sandbox \"6503aeb68ea1f10f04479a18a92a87c64ca3c3a9cbfe36ac3a1f096d4ef671b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"209e34e2090420ddf0025bc56a3eb51b28ea5831da6530c9f4741b76a79c863a\"" Sep 6 01:23:46.836550 env[1476]: time="2025-09-06T01:23:46.836516366Z" level=info msg="StartContainer for \"209e34e2090420ddf0025bc56a3eb51b28ea5831da6530c9f4741b76a79c863a\"" Sep 6 01:23:46.855830 systemd[1]: Started cri-containerd-209e34e2090420ddf0025bc56a3eb51b28ea5831da6530c9f4741b76a79c863a.scope. Sep 6 01:23:46.900758 env[1476]: time="2025-09-06T01:23:46.900679677Z" level=info msg="StartContainer for \"209e34e2090420ddf0025bc56a3eb51b28ea5831da6530c9f4741b76a79c863a\" returns successfully" Sep 6 01:23:47.212830 kubelet[1885]: E0906 01:23:47.212783 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:47.344106 kubelet[1885]: I0906 01:23:47.344032 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mcwww" podStartSLOduration=6.602288647 podStartE2EDuration="8.344013477s" podCreationTimestamp="2025-09-06 01:23:39 +0000 UTC" firstStartedPulling="2025-09-06 01:23:45.024317404 +0000 UTC m=+6.681648147" lastFinishedPulling="2025-09-06 01:23:46.766042234 +0000 UTC m=+8.423372977" observedRunningTime="2025-09-06 01:23:47.343753999 +0000 UTC m=+9.001084702" watchObservedRunningTime="2025-09-06 01:23:47.344013477 +0000 UTC m=+9.001344220" Sep 6 01:23:47.478882 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Sep 6 01:23:48.213326 kubelet[1885]: E0906 01:23:48.213261 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:49.214396 kubelet[1885]: E0906 01:23:49.214347 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:50.214713 kubelet[1885]: E0906 01:23:50.214652 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:51.215029 kubelet[1885]: E0906 01:23:51.214969 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:52.216046 kubelet[1885]: E0906 01:23:52.216004 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:52.287503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730560234.mount: Deactivated successfully. Sep 6 01:23:53.216475 kubelet[1885]: E0906 01:23:53.216431 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:53.938177 update_engine[1469]: I0906 01:23:53.937788 1469 update_attempter.cc:509] Updating boot flags... Sep 6 01:23:54.218147 kubelet[1885]: E0906 01:23:54.217807 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:54.633502 env[1476]: time="2025-09-06T01:23:54.633447053Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:54.642081 env[1476]: time="2025-09-06T01:23:54.642022884Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:54.647401 env[1476]: time="2025-09-06T01:23:54.647356814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:23:54.648049 env[1476]: time="2025-09-06T01:23:54.648016090Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 01:23:54.651242 env[1476]: time="2025-09-06T01:23:54.651191712Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:23:54.674594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880092263.mount: Deactivated successfully. Sep 6 01:23:54.679559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334979124.mount: Deactivated successfully. 
Sep 6 01:23:54.696505 env[1476]: time="2025-09-06T01:23:54.696440496Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\"" Sep 6 01:23:54.697481 env[1476]: time="2025-09-06T01:23:54.697444850Z" level=info msg="StartContainer for \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\"" Sep 6 01:23:54.716826 systemd[1]: Started cri-containerd-1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a.scope. Sep 6 01:23:54.747665 env[1476]: time="2025-09-06T01:23:54.747612487Z" level=info msg="StartContainer for \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\" returns successfully" Sep 6 01:23:54.753989 systemd[1]: cri-containerd-1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a.scope: Deactivated successfully. Sep 6 01:23:55.217949 kubelet[1885]: E0906 01:23:55.217898 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:55.672760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a-rootfs.mount: Deactivated successfully. Sep 6 01:23:56.218954 kubelet[1885]: E0906 01:23:56.218901 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:56.800719 env[1476]: time="2025-09-06T01:23:56.800672570Z" level=info msg="shim disconnected" id=1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a Sep 6 01:23:56.801144 env[1476]: time="2025-09-06T01:23:56.801121688Z" level=warning msg="cleaning up after shim disconnected" id=1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a namespace=k8s.io Sep 6 01:23:56.801206 env[1476]: time="2025-09-06T01:23:56.801193447Z" level=info msg="cleaning up dead shim" Sep 6 01:23:56.808958 env[1476]: time="2025-09-06T01:23:56.808911529Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:23:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2317 runtime=io.containerd.runc.v2\n" Sep 6 01:23:57.219215 kubelet[1885]: E0906 01:23:57.219175 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:57.354285 env[1476]: time="2025-09-06T01:23:57.354095928Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:23:57.393094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1979908187.mount: Deactivated successfully. Sep 6 01:23:57.398584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091427124.mount: Deactivated successfully. 
Sep 6 01:23:57.416287 env[1476]: time="2025-09-06T01:23:57.416228798Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\"" Sep 6 01:23:57.416841 env[1476]: time="2025-09-06T01:23:57.416813636Z" level=info msg="StartContainer for \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\"" Sep 6 01:23:57.435526 systemd[1]: Started cri-containerd-2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f.scope. Sep 6 01:23:57.469343 env[1476]: time="2025-09-06T01:23:57.469227791Z" level=info msg="StartContainer for \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\" returns successfully" Sep 6 01:23:57.475779 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:23:57.475984 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:23:57.476171 systemd[1]: Stopping systemd-sysctl.service... Sep 6 01:23:57.478613 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:23:57.482947 systemd[1]: cri-containerd-2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f.scope: Deactivated successfully. Sep 6 01:23:57.489676 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:23:57.521398 env[1476]: time="2025-09-06T01:23:57.521343868Z" level=info msg="shim disconnected" id=2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f Sep 6 01:23:57.521398 env[1476]: time="2025-09-06T01:23:57.521392828Z" level=warning msg="cleaning up after shim disconnected" id=2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f namespace=k8s.io Sep 6 01:23:57.521398 env[1476]: time="2025-09-06T01:23:57.521402908Z" level=info msg="cleaning up dead shim" Sep 6 01:23:57.528462 env[1476]: time="2025-09-06T01:23:57.528404315Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:23:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2382 runtime=io.containerd.runc.v2\n" Sep 6 01:23:58.220295 kubelet[1885]: E0906 01:23:58.220241 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:58.355981 env[1476]: time="2025-09-06T01:23:58.355933721Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:23:58.390633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f-rootfs.mount: Deactivated successfully. Sep 6 01:23:58.410519 env[1476]: time="2025-09-06T01:23:58.410449043Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\"" Sep 6 01:23:58.411087 env[1476]: time="2025-09-06T01:23:58.411055240Z" level=info msg="StartContainer for \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\"" Sep 6 01:23:58.434683 systemd[1]: run-containerd-runc-k8s.io-0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59-runc.Qbu0BD.mount: Deactivated successfully. Sep 6 01:23:58.437938 systemd[1]: Started cri-containerd-0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59.scope. 
Sep 6 01:23:58.470180 systemd[1]: cri-containerd-0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59.scope: Deactivated successfully. Sep 6 01:23:58.470893 env[1476]: time="2025-09-06T01:23:58.470529340Z" level=info msg="StartContainer for \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\" returns successfully" Sep 6 01:23:58.503859 env[1476]: time="2025-09-06T01:23:58.503808115Z" level=info msg="shim disconnected" id=0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59 Sep 6 01:23:58.503859 env[1476]: time="2025-09-06T01:23:58.503853755Z" level=warning msg="cleaning up after shim disconnected" id=0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59 namespace=k8s.io Sep 6 01:23:58.503859 env[1476]: time="2025-09-06T01:23:58.503863635Z" level=info msg="cleaning up dead shim" Sep 6 01:23:58.511229 env[1476]: time="2025-09-06T01:23:58.511174283Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:23:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2440 runtime=io.containerd.runc.v2\n" Sep 6 01:23:59.207138 kubelet[1885]: E0906 01:23:59.207095 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:59.220759 kubelet[1885]: E0906 01:23:59.220716 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:23:59.363840 env[1476]: time="2025-09-06T01:23:59.363794213Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 01:23:59.390639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59-rootfs.mount: Deactivated successfully. Sep 6 01:23:59.394329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616437435.mount: Deactivated successfully. Sep 6 01:23:59.412982 env[1476]: time="2025-09-06T01:23:59.412917172Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\"" Sep 6 01:23:59.413993 env[1476]: time="2025-09-06T01:23:59.413958368Z" level=info msg="StartContainer for \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\"" Sep 6 01:23:59.436749 systemd[1]: Started cri-containerd-8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3.scope. Sep 6 01:23:59.467898 systemd[1]: cri-containerd-8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3.scope: Deactivated successfully. 
Sep 6 01:23:59.472967 env[1476]: time="2025-09-06T01:23:59.472863806Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38fbebce_5f6f_4b59_8987_206f45f67155.slice/cri-containerd-8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3.scope/memory.events\": no such file or directory" Sep 6 01:23:59.478283 env[1476]: time="2025-09-06T01:23:59.478233064Z" level=info msg="StartContainer for \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\" returns successfully" Sep 6 01:23:59.509276 env[1476]: time="2025-09-06T01:23:59.509227938Z" level=info msg="shim disconnected" id=8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3 Sep 6 01:23:59.509530 env[1476]: time="2025-09-06T01:23:59.509510696Z" level=warning msg="cleaning up after shim disconnected" id=8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3 namespace=k8s.io Sep 6 01:23:59.509591 env[1476]: time="2025-09-06T01:23:59.509578616Z" level=info msg="cleaning up dead shim" Sep 6 01:23:59.517394 env[1476]: time="2025-09-06T01:23:59.517341424Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:23:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2499 runtime=io.containerd.runc.v2\n" Sep 6 01:24:00.222072 kubelet[1885]: E0906 01:24:00.222024 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:00.363269 env[1476]: time="2025-09-06T01:24:00.363226729Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:24:00.390779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3-rootfs.mount: Deactivated successfully. Sep 6 01:24:00.398723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145895771.mount: Deactivated successfully. Sep 6 01:24:00.405232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788070349.mount: Deactivated successfully. Sep 6 01:24:00.423216 env[1476]: time="2025-09-06T01:24:00.423147259Z" level=info msg="CreateContainer within sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\"" Sep 6 01:24:00.424084 env[1476]: time="2025-09-06T01:24:00.424051895Z" level=info msg="StartContainer for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\"" Sep 6 01:24:00.441620 systemd[1]: Started cri-containerd-fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02.scope. Sep 6 01:24:00.477185 env[1476]: time="2025-09-06T01:24:00.477005332Z" level=info msg="StartContainer for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" returns successfully" Sep 6 01:24:00.585759 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 01:24:00.640555 kubelet[1885]: I0906 01:24:00.640282 1885 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 01:24:01.142769 kernel: Initializing XFRM netlink socket Sep 6 01:24:01.151790 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 01:24:01.222947 kubelet[1885]: E0906 01:24:01.222892 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:02.223344 kubelet[1885]: E0906 01:24:02.223303 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:02.821228 systemd-networkd[1626]: cilium_host: Link UP Sep 6 01:24:02.821332 systemd-networkd[1626]: cilium_net: Link UP Sep 6 01:24:02.821334 systemd-networkd[1626]: cilium_net: Gained carrier Sep 6 01:24:02.821446 systemd-networkd[1626]: cilium_host: Gained carrier Sep 6 01:24:02.823996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 01:24:02.823672 systemd-networkd[1626]: cilium_host: Gained IPv6LL Sep 6 01:24:02.962433 systemd-networkd[1626]: cilium_vxlan: Link UP Sep 6 01:24:02.962439 systemd-networkd[1626]: cilium_vxlan: Gained carrier Sep 6 01:24:03.041967 systemd-networkd[1626]: cilium_net: Gained IPv6LL Sep 6 01:24:03.223817 kubelet[1885]: E0906 01:24:03.223774 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:03.236785 kernel: NET: Registered PF_ALG protocol family Sep 6 01:24:04.078364 systemd-networkd[1626]: lxc_health: Link UP Sep 6 01:24:04.092864 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:24:04.093411 systemd-networkd[1626]: lxc_health: Gained carrier Sep 6 01:24:04.224649 kubelet[1885]: E0906 01:24:04.224601 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:04.265933 systemd-networkd[1626]: cilium_vxlan: Gained IPv6LL Sep 6 01:24:05.225531 kubelet[1885]: E0906 01:24:05.225479 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:05.264263 kubelet[1885]: I0906 01:24:05.263301 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xpmvw" podStartSLOduration=16.671763581 podStartE2EDuration="26.26325072s" podCreationTimestamp="2025-09-06 01:23:39 +0000 UTC" firstStartedPulling="2025-09-06 01:23:45.058138702 +0000 UTC m=+6.715469445" lastFinishedPulling="2025-09-06 01:23:54.649625841 +0000 UTC m=+16.306956584" observedRunningTime="2025-09-06 01:24:01.394301863 +0000 UTC m=+23.051632606" watchObservedRunningTime="2025-09-06 01:24:05.26325072 +0000 UTC m=+26.920581463" Sep 6 01:24:05.269678 systemd[1]: Created slice kubepods-besteffort-pod143a643d_9e62_4b22_a10a_0eb3825cd0a6.slice. 
Sep 6 01:24:05.315017 kubelet[1885]: I0906 01:24:05.314969 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrx4\" (UniqueName: \"kubernetes.io/projected/143a643d-9e62-4b22-a10a-0eb3825cd0a6-kube-api-access-vfrx4\") pod \"nginx-deployment-7fcdb87857-48dzj\" (UID: \"143a643d-9e62-4b22-a10a-0eb3825cd0a6\") " pod="default/nginx-deployment-7fcdb87857-48dzj" Sep 6 01:24:05.418050 systemd-networkd[1626]: lxc_health: Gained IPv6LL Sep 6 01:24:05.574818 env[1476]: time="2025-09-06T01:24:05.574681813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-48dzj,Uid:143a643d-9e62-4b22-a10a-0eb3825cd0a6,Namespace:default,Attempt:0,}" Sep 6 01:24:05.650706 systemd-networkd[1626]: lxc0f3e8dcf15d3: Link UP Sep 6 01:24:05.660761 kernel: eth0: renamed from tmp350cc Sep 6 01:24:05.676216 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:24:05.676351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0f3e8dcf15d3: link becomes ready Sep 6 01:24:05.678243 systemd-networkd[1626]: lxc0f3e8dcf15d3: Gained carrier Sep 6 01:24:06.226020 kubelet[1885]: E0906 01:24:06.225889 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:07.226867 kubelet[1885]: E0906 01:24:07.226820 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:07.529904 systemd-networkd[1626]: lxc0f3e8dcf15d3: Gained IPv6LL Sep 6 01:24:08.227991 kubelet[1885]: E0906 01:24:08.227951 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:08.464996 kubelet[1885]: I0906 01:24:08.464385 1885 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:24:08.911238 env[1476]: time="2025-09-06T01:24:08.911147492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:08.911605 env[1476]: time="2025-09-06T01:24:08.911248697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:08.911605 env[1476]: time="2025-09-06T01:24:08.911275978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:08.911795 env[1476]: time="2025-09-06T01:24:08.911728877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/350cc3f27a033c16016d8b00af584197f3c3fba8d97276e8531510331074b3fd pid=3022 runtime=io.containerd.runc.v2 Sep 6 01:24:08.931103 systemd[1]: run-containerd-runc-k8s.io-350cc3f27a033c16016d8b00af584197f3c3fba8d97276e8531510331074b3fd-runc.1c3gA9.mount: Deactivated successfully. Sep 6 01:24:08.934914 systemd[1]: Started cri-containerd-350cc3f27a033c16016d8b00af584197f3c3fba8d97276e8531510331074b3fd.scope. 
Sep 6 01:24:08.967276 env[1476]: time="2025-09-06T01:24:08.967232847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-48dzj,Uid:143a643d-9e62-4b22-a10a-0eb3825cd0a6,Namespace:default,Attempt:0,} returns sandbox id \"350cc3f27a033c16016d8b00af584197f3c3fba8d97276e8531510331074b3fd\"" Sep 6 01:24:08.969712 env[1476]: time="2025-09-06T01:24:08.969617188Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 01:24:09.229475 kubelet[1885]: E0906 01:24:09.228904 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:10.229630 kubelet[1885]: E0906 01:24:10.229559 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:11.230594 kubelet[1885]: E0906 01:24:11.230539 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:11.782669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052612679.mount: Deactivated successfully. Sep 6 01:24:12.231160 kubelet[1885]: E0906 01:24:12.231112 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:13.109509 env[1476]: time="2025-09-06T01:24:13.109447391Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:13.118582 env[1476]: time="2025-09-06T01:24:13.118527249Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:13.125250 env[1476]: time="2025-09-06T01:24:13.125199697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:13.132339 env[1476]: time="2025-09-06T01:24:13.132299681Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:13.133159 env[1476]: time="2025-09-06T01:24:13.133121351Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 01:24:13.136197 env[1476]: time="2025-09-06T01:24:13.136154384Z" level=info msg="CreateContainer within sandbox \"350cc3f27a033c16016d8b00af584197f3c3fba8d97276e8531510331074b3fd\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 6 01:24:13.164473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267824919.mount: Deactivated successfully. Sep 6 01:24:13.171452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780877301.mount: Deactivated successfully. 
Sep 6 01:24:13.192759 env[1476]: time="2025-09-06T01:24:13.192689525Z" level=info msg="CreateContainer within sandbox \"350cc3f27a033c16016d8b00af584197f3c3fba8d97276e8531510331074b3fd\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3154ca3265abbd2b5549dadcdf40fa1667b91dac54f700d3b59fb77b7214a405\"" Sep 6 01:24:13.193563 env[1476]: time="2025-09-06T01:24:13.193528676Z" level=info msg="StartContainer for \"3154ca3265abbd2b5549dadcdf40fa1667b91dac54f700d3b59fb77b7214a405\"" Sep 6 01:24:13.212778 systemd[1]: Started cri-containerd-3154ca3265abbd2b5549dadcdf40fa1667b91dac54f700d3b59fb77b7214a405.scope. Sep 6 01:24:13.231473 kubelet[1885]: E0906 01:24:13.231421 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:13.246908 env[1476]: time="2025-09-06T01:24:13.246802536Z" level=info msg="StartContainer for \"3154ca3265abbd2b5549dadcdf40fa1667b91dac54f700d3b59fb77b7214a405\" returns successfully" Sep 6 01:24:13.396934 kubelet[1885]: I0906 01:24:13.396027 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-48dzj" podStartSLOduration=4.229932215 podStartE2EDuration="8.3960038s" podCreationTimestamp="2025-09-06 01:24:05 +0000 UTC" firstStartedPulling="2025-09-06 01:24:08.968418097 +0000 UTC m=+30.625748800" lastFinishedPulling="2025-09-06 01:24:13.134489642 +0000 UTC m=+34.791820385" observedRunningTime="2025-09-06 01:24:13.395978439 +0000 UTC m=+35.053309222" watchObservedRunningTime="2025-09-06 01:24:13.3960038 +0000 UTC m=+35.053334543" Sep 6 01:24:14.231757 kubelet[1885]: E0906 01:24:14.231704 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:15.232505 kubelet[1885]: E0906 01:24:15.232449 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:16.233652 kubelet[1885]: E0906 01:24:16.233586 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:17.233929 kubelet[1885]: E0906 01:24:17.233889 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:18.234915 kubelet[1885]: E0906 01:24:18.234873 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:19.207087 kubelet[1885]: E0906 01:24:19.207042 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:19.235589 kubelet[1885]: E0906 01:24:19.235559 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:20.236365 kubelet[1885]: E0906 01:24:20.236310 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:20.416094 systemd[1]: Created slice kubepods-besteffort-pod8f0f56e1_f24a_4906_a38d_d56bee1f65b4.slice. 
Sep 6 01:24:20.502423 kubelet[1885]: I0906 01:24:20.502056 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfpc4\" (UniqueName: \"kubernetes.io/projected/8f0f56e1-f24a-4906-a38d-d56bee1f65b4-kube-api-access-lfpc4\") pod \"nfs-server-provisioner-0\" (UID: \"8f0f56e1-f24a-4906-a38d-d56bee1f65b4\") " pod="default/nfs-server-provisioner-0" Sep 6 01:24:20.502662 kubelet[1885]: I0906 01:24:20.502639 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8f0f56e1-f24a-4906-a38d-d56bee1f65b4-data\") pod \"nfs-server-provisioner-0\" (UID: \"8f0f56e1-f24a-4906-a38d-d56bee1f65b4\") " pod="default/nfs-server-provisioner-0" Sep 6 01:24:20.721069 env[1476]: time="2025-09-06T01:24:20.720969344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8f0f56e1-f24a-4906-a38d-d56bee1f65b4,Namespace:default,Attempt:0,}" Sep 6 01:24:20.831533 systemd-networkd[1626]: lxc7fa40cfde073: Link UP Sep 6 01:24:20.847398 kernel: eth0: renamed from tmpafd12 Sep 6 01:24:20.861115 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:24:20.861239 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7fa40cfde073: link becomes ready Sep 6 01:24:20.861925 systemd-networkd[1626]: lxc7fa40cfde073: Gained carrier Sep 6 01:24:21.062788 env[1476]: time="2025-09-06T01:24:21.062684763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:21.063051 env[1476]: time="2025-09-06T01:24:21.062994813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:21.063190 env[1476]: time="2025-09-06T01:24:21.063150137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:21.063571 env[1476]: time="2025-09-06T01:24:21.063510068Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afd12e0d4a6bf6f22069d9d04abee94b257e6b18a6e70e22b3492b7bf4af45b4 pid=3146 runtime=io.containerd.runc.v2 Sep 6 01:24:21.079537 systemd[1]: Started cri-containerd-afd12e0d4a6bf6f22069d9d04abee94b257e6b18a6e70e22b3492b7bf4af45b4.scope. Sep 6 01:24:21.120046 env[1476]: time="2025-09-06T01:24:21.119991920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8f0f56e1-f24a-4906-a38d-d56bee1f65b4,Namespace:default,Attempt:0,} returns sandbox id \"afd12e0d4a6bf6f22069d9d04abee94b257e6b18a6e70e22b3492b7bf4af45b4\"" Sep 6 01:24:21.121984 env[1476]: time="2025-09-06T01:24:21.121717132Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 6 01:24:21.236693 kubelet[1885]: E0906 01:24:21.236612 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:21.616019 systemd[1]: run-containerd-runc-k8s.io-afd12e0d4a6bf6f22069d9d04abee94b257e6b18a6e70e22b3492b7bf4af45b4-runc.hzx7nT.mount: Deactivated successfully. 
Sep 6 01:24:21.929995 systemd-networkd[1626]: lxc7fa40cfde073: Gained IPv6LL Sep 6 01:24:22.237263 kubelet[1885]: E0906 01:24:22.236890 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:23.237354 kubelet[1885]: E0906 01:24:23.237307 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:23.478279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2189592692.mount: Deactivated successfully. Sep 6 01:24:24.237902 kubelet[1885]: E0906 01:24:24.237831 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:25.238793 kubelet[1885]: E0906 01:24:25.238714 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:25.604675 env[1476]: time="2025-09-06T01:24:25.604608626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:25.616614 env[1476]: time="2025-09-06T01:24:25.616547708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:25.622684 env[1476]: time="2025-09-06T01:24:25.622630512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:25.628279 env[1476]: time="2025-09-06T01:24:25.628211743Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:25.629300 env[1476]: time="2025-09-06T01:24:25.629259531Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 6 01:24:25.635812 env[1476]: time="2025-09-06T01:24:25.635761507Z" level=info msg="CreateContainer within sandbox \"afd12e0d4a6bf6f22069d9d04abee94b257e6b18a6e70e22b3492b7bf4af45b4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 6 01:24:25.670391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430502386.mount: Deactivated successfully. Sep 6 01:24:25.676586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798238831.mount: Deactivated successfully. Sep 6 01:24:25.693654 env[1476]: time="2025-09-06T01:24:25.693593067Z" level=info msg="CreateContainer within sandbox \"afd12e0d4a6bf6f22069d9d04abee94b257e6b18a6e70e22b3492b7bf4af45b4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d8e998f86610e11f7eb08deb5e3bceddfc02b7061ba3b8d1afb054d733ba24cd\"" Sep 6 01:24:25.694553 env[1476]: time="2025-09-06T01:24:25.694446210Z" level=info msg="StartContainer for \"d8e998f86610e11f7eb08deb5e3bceddfc02b7061ba3b8d1afb054d733ba24cd\"" Sep 6 01:24:25.714856 systemd[1]: Started cri-containerd-d8e998f86610e11f7eb08deb5e3bceddfc02b7061ba3b8d1afb054d733ba24cd.scope. 
Sep 6 01:24:25.749988 env[1476]: time="2025-09-06T01:24:25.749922386Z" level=info msg="StartContainer for \"d8e998f86610e11f7eb08deb5e3bceddfc02b7061ba3b8d1afb054d733ba24cd\" returns successfully" Sep 6 01:24:26.239486 kubelet[1885]: E0906 01:24:26.239439 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:27.240157 kubelet[1885]: E0906 01:24:27.240113 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:28.241174 kubelet[1885]: E0906 01:24:28.241116 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:29.242068 kubelet[1885]: E0906 01:24:29.242018 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:30.242984 kubelet[1885]: E0906 01:24:30.242942 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:31.244325 kubelet[1885]: E0906 01:24:31.244269 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:32.244693 kubelet[1885]: E0906 01:24:32.244637 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:33.244865 kubelet[1885]: E0906 01:24:33.244810 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:34.245778 kubelet[1885]: E0906 01:24:34.245730 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:35.246722 kubelet[1885]: E0906 01:24:35.246676 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:35.896134 kubelet[1885]: I0906 01:24:35.896074 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.387136876 podStartE2EDuration="15.896057834s" podCreationTimestamp="2025-09-06 01:24:20 +0000 UTC" firstStartedPulling="2025-09-06 01:24:21.121431243 +0000 UTC m=+42.778761986" lastFinishedPulling="2025-09-06 01:24:25.630352201 +0000 UTC m=+47.287682944" observedRunningTime="2025-09-06 01:24:26.428226119 +0000 UTC m=+48.085556902" watchObservedRunningTime="2025-09-06 01:24:35.896057834 +0000 UTC m=+57.553388577" Sep 6 01:24:35.900670 systemd[1]: Created slice kubepods-besteffort-pod4ca9d6c5_d78b_43c3_8731_0ea34c9c8d17.slice. 
Sep 6 01:24:35.993587 kubelet[1885]: I0906 01:24:35.993543 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spg7c\" (UniqueName: \"kubernetes.io/projected/4ca9d6c5-d78b-43c3-8731-0ea34c9c8d17-kube-api-access-spg7c\") pod \"test-pod-1\" (UID: \"4ca9d6c5-d78b-43c3-8731-0ea34c9c8d17\") " pod="default/test-pod-1" Sep 6 01:24:35.993822 kubelet[1885]: I0906 01:24:35.993804 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10e828d0-a407-4a2d-9050-d6ffd59ff33c\" (UniqueName: \"kubernetes.io/nfs/4ca9d6c5-d78b-43c3-8731-0ea34c9c8d17-pvc-10e828d0-a407-4a2d-9050-d6ffd59ff33c\") pod \"test-pod-1\" (UID: \"4ca9d6c5-d78b-43c3-8731-0ea34c9c8d17\") " pod="default/test-pod-1" Sep 6 01:24:36.174762 kernel: FS-Cache: Loaded Sep 6 01:24:36.247811 kubelet[1885]: E0906 01:24:36.247767 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:36.339078 kernel: RPC: Registered named UNIX socket transport module. Sep 6 01:24:36.339210 kernel: RPC: Registered udp transport module. Sep 6 01:24:36.342716 kernel: RPC: Registered tcp transport module. Sep 6 01:24:36.348761 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 6 01:24:36.458760 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 6 01:24:36.625708 kernel: NFS: Registering the id_resolver key type Sep 6 01:24:36.625873 kernel: Key type id_resolver registered Sep 6 01:24:36.625899 kernel: Key type id_legacy registered Sep 6 01:24:37.248494 kubelet[1885]: E0906 01:24:37.248443 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:37.306363 nfsidmap[3266]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.8-n-9a681c3ae9' Sep 6 01:24:37.315877 nfsidmap[3267]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.8-n-9a681c3ae9' Sep 6 01:24:37.404295 env[1476]: time="2025-09-06T01:24:37.404243256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4ca9d6c5-d78b-43c3-8731-0ea34c9c8d17,Namespace:default,Attempt:0,}" Sep 6 01:24:37.471934 systemd-networkd[1626]: lxc7ed2f03df3e6: Link UP Sep 6 01:24:37.485804 kernel: eth0: renamed from tmp50b23 Sep 6 01:24:37.502013 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:24:37.502167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7ed2f03df3e6: link becomes ready Sep 6 01:24:37.502265 systemd-networkd[1626]: lxc7ed2f03df3e6: Gained carrier Sep 6 01:24:37.693557 env[1476]: time="2025-09-06T01:24:37.693466366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:37.693557 env[1476]: time="2025-09-06T01:24:37.693512287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:37.693890 env[1476]: time="2025-09-06T01:24:37.693523288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:37.693890 env[1476]: time="2025-09-06T01:24:37.693814373Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50b23c7e44b8a98aa521191575acb1e4275af64a9af70bcf7e6515e8746f4039 pid=3294 runtime=io.containerd.runc.v2 Sep 6 01:24:37.710691 systemd[1]: Started cri-containerd-50b23c7e44b8a98aa521191575acb1e4275af64a9af70bcf7e6515e8746f4039.scope. Sep 6 01:24:37.745033 env[1476]: time="2025-09-06T01:24:37.744990598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4ca9d6c5-d78b-43c3-8731-0ea34c9c8d17,Namespace:default,Attempt:0,} returns sandbox id \"50b23c7e44b8a98aa521191575acb1e4275af64a9af70bcf7e6515e8746f4039\"" Sep 6 01:24:37.747110 env[1476]: time="2025-09-06T01:24:37.747075160Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 01:24:38.034970 env[1476]: time="2025-09-06T01:24:38.034907387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:38.045037 env[1476]: time="2025-09-06T01:24:38.044979223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:38.050666 env[1476]: time="2025-09-06T01:24:38.050619694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:38.058670 env[1476]: time="2025-09-06T01:24:38.058613810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:24:38.059546 env[1476]: time="2025-09-06T01:24:38.059513108Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 01:24:38.062137 env[1476]: time="2025-09-06T01:24:38.062082878Z" level=info msg="CreateContainer within sandbox \"50b23c7e44b8a98aa521191575acb1e4275af64a9af70bcf7e6515e8746f4039\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 6 01:24:38.111199 env[1476]: time="2025-09-06T01:24:38.111115477Z" level=info msg="CreateContainer within sandbox \"50b23c7e44b8a98aa521191575acb1e4275af64a9af70bcf7e6515e8746f4039\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e4b15098e1f5b0ec271c571207adb86a90fd65e18ec49a954423d54583899401\"" Sep 6 01:24:38.112076 env[1476]: time="2025-09-06T01:24:38.112045335Z" level=info msg="StartContainer for \"e4b15098e1f5b0ec271c571207adb86a90fd65e18ec49a954423d54583899401\"" Sep 6 01:24:38.127189 systemd[1]: Started cri-containerd-e4b15098e1f5b0ec271c571207adb86a90fd65e18ec49a954423d54583899401.scope. 
Sep 6 01:24:38.161879 env[1476]: time="2025-09-06T01:24:38.161817708Z" level=info msg="StartContainer for \"e4b15098e1f5b0ec271c571207adb86a90fd65e18ec49a954423d54583899401\" returns successfully" Sep 6 01:24:38.249142 kubelet[1885]: E0906 01:24:38.249076 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:38.450219 kubelet[1885]: I0906 01:24:38.450146 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.135702718 podStartE2EDuration="17.450126945s" podCreationTimestamp="2025-09-06 01:24:21 +0000 UTC" firstStartedPulling="2025-09-06 01:24:37.746116941 +0000 UTC m=+59.403447644" lastFinishedPulling="2025-09-06 01:24:38.060541128 +0000 UTC m=+59.717871871" observedRunningTime="2025-09-06 01:24:38.449915101 +0000 UTC m=+60.107245844" watchObservedRunningTime="2025-09-06 01:24:38.450126945 +0000 UTC m=+60.107457688" Sep 6 01:24:39.207372 kubelet[1885]: E0906 01:24:39.207322 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:39.209903 systemd-networkd[1626]: lxc7ed2f03df3e6: Gained IPv6LL Sep 6 01:24:39.249797 kubelet[1885]: E0906 01:24:39.249761 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:40.251323 kubelet[1885]: E0906 01:24:40.251265 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:41.252016 kubelet[1885]: E0906 01:24:41.251974 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:42.253151 kubelet[1885]: E0906 01:24:42.253108 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:42.617562 systemd[1]: run-containerd-runc-k8s.io-fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02-runc.d7n7UR.mount: Deactivated successfully. Sep 6 01:24:42.633953 env[1476]: time="2025-09-06T01:24:42.633892869Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:24:42.641979 env[1476]: time="2025-09-06T01:24:42.641936932Z" level=info msg="StopContainer for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" with timeout 2 (s)" Sep 6 01:24:42.642511 env[1476]: time="2025-09-06T01:24:42.642486382Z" level=info msg="Stop container \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" with signal terminated" Sep 6 01:24:42.649028 systemd-networkd[1626]: lxc_health: Link DOWN Sep 6 01:24:42.649040 systemd-networkd[1626]: lxc_health: Lost carrier Sep 6 01:24:42.673267 systemd[1]: cri-containerd-fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02.scope: Deactivated successfully. Sep 6 01:24:42.673625 systemd[1]: cri-containerd-fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02.scope: Consumed 6.553s CPU time. Sep 6 01:24:42.691357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02-rootfs.mount: Deactivated successfully. 
Sep 6 01:24:43.253673 kubelet[1885]: E0906 01:24:43.253624 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:43.617876 env[1476]: time="2025-09-06T01:24:43.617816315Z" level=info msg="shim disconnected" id=fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02 Sep 6 01:24:43.617876 env[1476]: time="2025-09-06T01:24:43.617872236Z" level=warning msg="cleaning up after shim disconnected" id=fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02 namespace=k8s.io Sep 6 01:24:43.617876 env[1476]: time="2025-09-06T01:24:43.617882676Z" level=info msg="cleaning up dead shim" Sep 6 01:24:43.625717 env[1476]: time="2025-09-06T01:24:43.625662971Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3425 runtime=io.containerd.runc.v2\n" Sep 6 01:24:43.632884 env[1476]: time="2025-09-06T01:24:43.632819456Z" level=info msg="StopContainer for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" returns successfully" Sep 6 01:24:43.633589 env[1476]: time="2025-09-06T01:24:43.633545749Z" level=info msg="StopPodSandbox for \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\"" Sep 6 01:24:43.633677 env[1476]: time="2025-09-06T01:24:43.633614190Z" level=info msg="Container to stop \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:43.633677 env[1476]: time="2025-09-06T01:24:43.633629870Z" level=info msg="Container to stop \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:43.633677 env[1476]: time="2025-09-06T01:24:43.633644190Z" level=info msg="Container to stop \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:43.633677 env[1476]: time="2025-09-06T01:24:43.633655671Z" level=info msg="Container to stop \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:43.633677 env[1476]: time="2025-09-06T01:24:43.633665711Z" level=info msg="Container to stop \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:43.635475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb-shm.mount: Deactivated successfully. Sep 6 01:24:43.642631 systemd[1]: cri-containerd-42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb.scope: Deactivated successfully. Sep 6 01:24:43.664343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb-rootfs.mount: Deactivated successfully. 
Sep 6 01:24:43.679276 env[1476]: time="2025-09-06T01:24:43.679226384Z" level=info msg="shim disconnected" id=42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb Sep 6 01:24:43.679785 env[1476]: time="2025-09-06T01:24:43.679731433Z" level=warning msg="cleaning up after shim disconnected" id=42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb namespace=k8s.io Sep 6 01:24:43.679862 env[1476]: time="2025-09-06T01:24:43.679847875Z" level=info msg="cleaning up dead shim" Sep 6 01:24:43.688098 env[1476]: time="2025-09-06T01:24:43.688052778Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3456 runtime=io.containerd.runc.v2\n" Sep 6 01:24:43.688594 env[1476]: time="2025-09-06T01:24:43.688561227Z" level=info msg="TearDown network for sandbox \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" successfully" Sep 6 01:24:43.688684 env[1476]: time="2025-09-06T01:24:43.688667069Z" level=info msg="StopPodSandbox for \"42e7243c0ab5f43b19eecc89fbd2cab76041a2f04e70ec743cc47d2c174503fb\" returns successfully" Sep 6 01:24:43.737814 kubelet[1885]: I0906 01:24:43.737768 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-run\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.737989 kubelet[1885]: I0906 01:24:43.737876 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.737989 kubelet[1885]: I0906 01:24:43.737924 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-kernel\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.737989 kubelet[1885]: I0906 01:24:43.737946 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-bpf-maps\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.737989 kubelet[1885]: I0906 01:24:43.737961 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cni-path\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738096 kubelet[1885]: I0906 01:24:43.737993 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.738096 kubelet[1885]: I0906 01:24:43.738013 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.738096 kubelet[1885]: I0906 01:24:43.738035 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-config-path\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738096 kubelet[1885]: I0906 01:24:43.738070 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cni-path" (OuterVolumeSpecName: "cni-path") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.738406 kubelet[1885]: I0906 01:24:43.738051 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-net\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738441 kubelet[1885]: I0906 01:24:43.738417 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-hubble-tls\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738441 kubelet[1885]: I0906 01:24:43.738436 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg25t\" (UniqueName: \"kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-kube-api-access-lg25t\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738503 kubelet[1885]: I0906 01:24:43.738483 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.738822 kubelet[1885]: I0906 01:24:43.738800 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-lib-modules\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738878 kubelet[1885]: I0906 01:24:43.738832 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-etc-cni-netd\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738878 kubelet[1885]: I0906 01:24:43.738863 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38fbebce-5f6f-4b59-8987-206f45f67155-clustermesh-secrets\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738931 kubelet[1885]: I0906 01:24:43.738880 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-xtables-lock\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738931 kubelet[1885]: I0906 01:24:43.738897 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-hostproc\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.738931 kubelet[1885]: I0906 01:24:43.738912 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-cgroup\") pod \"38fbebce-5f6f-4b59-8987-206f45f67155\" (UID: \"38fbebce-5f6f-4b59-8987-206f45f67155\") " Sep 6 01:24:43.739047 kubelet[1885]: I0906 01:24:43.738956 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-run\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.739047 kubelet[1885]: I0906 01:24:43.738966 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-kernel\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.739047 kubelet[1885]: I0906 01:24:43.738978 1885 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-bpf-maps\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.739047 kubelet[1885]: I0906 01:24:43.738986 1885 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cni-path\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.739047 kubelet[1885]: I0906 01:24:43.739027 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.741055 kubelet[1885]: I0906 01:24:43.741000 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:24:43.741185 kubelet[1885]: I0906 01:24:43.741087 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.741185 kubelet[1885]: I0906 01:24:43.741108 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.741185 kubelet[1885]: I0906 01:24:43.741128 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.743160 systemd[1]: var-lib-kubelet-pods-38fbebce\x2d5f6f\x2d4b59\x2d8987\x2d206f45f67155-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:24:43.744111 kubelet[1885]: I0906 01:24:43.744055 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-hostproc" (OuterVolumeSpecName: "hostproc") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:43.744981 kubelet[1885]: I0906 01:24:43.744925 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:43.748481 systemd[1]: var-lib-kubelet-pods-38fbebce\x2d5f6f\x2d4b59\x2d8987\x2d206f45f67155-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:43.752285 kubelet[1885]: I0906 01:24:43.752240 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38fbebce-5f6f-4b59-8987-206f45f67155-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:43.752581 kubelet[1885]: I0906 01:24:43.752551 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-kube-api-access-lg25t" (OuterVolumeSpecName: "kube-api-access-lg25t") pod "38fbebce-5f6f-4b59-8987-206f45f67155" (UID: "38fbebce-5f6f-4b59-8987-206f45f67155"). InnerVolumeSpecName "kube-api-access-lg25t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:43.752986 systemd[1]: var-lib-kubelet-pods-38fbebce\x2d5f6f\x2d4b59\x2d8987\x2d206f45f67155-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlg25t.mount: Deactivated successfully. Sep 6 01:24:43.839476 kubelet[1885]: I0906 01:24:43.839439 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-host-proc-sys-net\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.839661 kubelet[1885]: I0906 01:24:43.839649 1885 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-hubble-tls\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.839781 kubelet[1885]: I0906 01:24:43.839768 1885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lg25t\" (UniqueName: \"kubernetes.io/projected/38fbebce-5f6f-4b59-8987-206f45f67155-kube-api-access-lg25t\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.839864 kubelet[1885]: I0906 01:24:43.839853 1885 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-lib-modules\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.839933 kubelet[1885]: I0906 01:24:43.839922 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-config-path\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.839995 kubelet[1885]: I0906 01:24:43.839986 1885 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38fbebce-5f6f-4b59-8987-206f45f67155-clustermesh-secrets\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.840066 kubelet[1885]: I0906 01:24:43.840056 1885 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-xtables-lock\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.840132 kubelet[1885]: I0906 01:24:43.840121 1885 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-hostproc\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.840203 kubelet[1885]: I0906 01:24:43.840193 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-cilium-cgroup\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:43.840267 kubelet[1885]: I0906 01:24:43.840257 1885 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38fbebce-5f6f-4b59-8987-206f45f67155-etc-cni-netd\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:44.254322 kubelet[1885]: E0906 01:24:44.254267 1885 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:44.328274 kubelet[1885]: E0906 01:24:44.328238 1885 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:24:44.448134 kubelet[1885]: I0906 01:24:44.448104 1885 scope.go:117] "RemoveContainer" containerID="fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02" Sep 6 01:24:44.450047 env[1476]: time="2025-09-06T01:24:44.449995040Z" level=info msg="RemoveContainer for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\"" Sep 6 01:24:44.452363 systemd[1]: Removed slice kubepods-burstable-pod38fbebce_5f6f_4b59_8987_206f45f67155.slice. Sep 6 01:24:44.452513 systemd[1]: kubepods-burstable-pod38fbebce_5f6f_4b59_8987_206f45f67155.slice: Consumed 6.651s CPU time. Sep 6 01:24:44.465438 env[1476]: time="2025-09-06T01:24:44.465380782Z" level=info msg="RemoveContainer for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" returns successfully" Sep 6 01:24:44.465790 kubelet[1885]: I0906 01:24:44.465762 1885 scope.go:117] "RemoveContainer" containerID="8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3" Sep 6 01:24:44.467075 env[1476]: time="2025-09-06T01:24:44.467017930Z" level=info msg="RemoveContainer for \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\"" Sep 6 01:24:44.475952 env[1476]: time="2025-09-06T01:24:44.475730518Z" level=info msg="RemoveContainer for \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\" returns successfully" Sep 6 01:24:44.476326 kubelet[1885]: I0906 01:24:44.476287 1885 scope.go:117] "RemoveContainer" containerID="0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59" Sep 6 01:24:44.478976 env[1476]: time="2025-09-06T01:24:44.478639688Z" level=info msg="RemoveContainer for \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\"" Sep 6 01:24:44.488441 env[1476]: time="2025-09-06T01:24:44.488384414Z" level=info msg="RemoveContainer for \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\" returns successfully" Sep 6 01:24:44.488867 kubelet[1885]: I0906 01:24:44.488844 1885 scope.go:117] "RemoveContainer" containerID="2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f" Sep 6 01:24:44.490173 env[1476]: time="2025-09-06T01:24:44.490137044Z" level=info msg="RemoveContainer for \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\"" Sep 6 01:24:44.499772 env[1476]: time="2025-09-06T01:24:44.499697167Z" level=info msg="RemoveContainer for \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\" returns successfully" Sep 6 01:24:44.500056 kubelet[1885]: I0906 01:24:44.499995 1885 scope.go:117] "RemoveContainer" containerID="1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a" Sep 6 01:24:44.501489 env[1476]: time="2025-09-06T01:24:44.501330714Z" level=info msg="RemoveContainer for \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\"" Sep 6 01:24:44.517056 env[1476]: time="2025-09-06T01:24:44.516103886Z" level=info msg="RemoveContainer for \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\" returns successfully" Sep 6 01:24:44.517056 env[1476]: time="2025-09-06T01:24:44.516669936Z" level=error msg="ContainerStatus for \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\": not found" Sep 6 01:24:44.517215 kubelet[1885]: I0906 01:24:44.516390 1885 scope.go:117] "RemoveContainer" containerID="fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02" Sep 6 01:24:44.517215 kubelet[1885]: E0906 01:24:44.516891 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\": not found" containerID="fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02" Sep 6 01:24:44.517215 kubelet[1885]: I0906 01:24:44.516920 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02"} err="failed to get container status \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe10ab1538e2b167086e657f566462ac5ebb164ba2506e85f14ce188e181bf02\": not found" Sep 6 01:24:44.517215 kubelet[1885]: I0906 01:24:44.516995 1885 scope.go:117] "RemoveContainer" containerID="8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3" Sep 6 01:24:44.517329 env[1476]: time="2025-09-06T01:24:44.517169464Z" level=error msg="ContainerStatus for \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\": not found" Sep 6 01:24:44.517355 kubelet[1885]: E0906 01:24:44.517297 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\": not found" containerID="8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3" Sep 6 01:24:44.517355 kubelet[1885]: I0906 01:24:44.517319 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3"} err="failed to get container status \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b7732ba47089ef0589ce90e66a93154c1dd4b98d0d1e784fc770b95d185a6b3\": not found" Sep 6 01:24:44.517355 kubelet[1885]: I0906 01:24:44.517334 1885 scope.go:117] "RemoveContainer" containerID="0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59" Sep 6 01:24:44.517543 env[1476]: time="2025-09-06T01:24:44.517488790Z" level=error msg="ContainerStatus for \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\": not found" Sep 6 01:24:44.517663 kubelet[1885]: E0906 01:24:44.517636 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\": not found" containerID="0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59" Sep 6 01:24:44.517711 kubelet[1885]: I0906 01:24:44.517664 1885 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59"} err="failed to get container status \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d71b4acc1840dfe2a15de96232750db3336b931f4c0e5a60e60f65f7eb7da59\": not found" Sep 6 01:24:44.517711 kubelet[1885]: I0906 01:24:44.517680 1885 scope.go:117] "RemoveContainer" containerID="2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f" Sep 6 01:24:44.517904 env[1476]: time="2025-09-06T01:24:44.517854236Z" level=error msg="ContainerStatus for \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\": not found" Sep 6 01:24:44.518012 kubelet[1885]: E0906 01:24:44.517988 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\": not found" containerID="2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f" Sep 6 01:24:44.518059 kubelet[1885]: I0906 01:24:44.518013 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f"} err="failed to get container status \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e6d79101dd80428236116904878a9ca7337d71ca0314939b0e18f42aec48e7f\": not found" Sep 6 01:24:44.518059 kubelet[1885]: I0906 01:24:44.518031 1885 scope.go:117] "RemoveContainer" containerID="1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a" Sep 6 01:24:44.518244 env[1476]: time="2025-09-06T01:24:44.518196482Z" level=error msg="ContainerStatus for \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\": not found" Sep 6 01:24:44.518394 kubelet[1885]: E0906 01:24:44.518336 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\": not found" containerID="1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a" Sep 6 01:24:44.518394 kubelet[1885]: I0906 01:24:44.518361 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a"} err="failed to get container status \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1dab0a7a4bcdd07215210273b6f62698353babfb35e602a7f80b58794a44e72a\": not found" Sep 6 01:24:45.255046 kubelet[1885]: E0906 01:24:45.254980 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:45.316876 kubelet[1885]: I0906 01:24:45.316834 1885 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fbebce-5f6f-4b59-8987-206f45f67155" 
path="/var/lib/kubelet/pods/38fbebce-5f6f-4b59-8987-206f45f67155/volumes" Sep 6 01:24:46.255631 kubelet[1885]: E0906 01:24:46.255580 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:46.331404 kubelet[1885]: I0906 01:24:46.331342 1885 memory_manager.go:355] "RemoveStaleState removing state" podUID="38fbebce-5f6f-4b59-8987-206f45f67155" containerName="cilium-agent" Sep 6 01:24:46.336744 systemd[1]: Created slice kubepods-besteffort-pod7b7b03b8_1c07_4cf9_bd3f_897d0c4e2e3c.slice. Sep 6 01:24:46.342018 systemd[1]: Created slice kubepods-burstable-pod467267fd_7cf5_468a_88a9_2c5ca4f2fb37.slice. Sep 6 01:24:46.354104 kubelet[1885]: I0906 01:24:46.354066 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hubble-tls\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.354383 kubelet[1885]: I0906 01:24:46.354314 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-bpf-maps\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.355618 kubelet[1885]: I0906 01:24:46.355583 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cni-path\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.355909 kubelet[1885]: I0906 01:24:46.355891 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-ipsec-secrets\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356046 kubelet[1885]: I0906 01:24:46.356029 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-lib-modules\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356160 kubelet[1885]: I0906 01:24:46.356147 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-xtables-lock\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356273 kubelet[1885]: I0906 01:24:46.356260 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-config-path\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356395 kubelet[1885]: I0906 01:24:46.356381 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl2f\" (UniqueName: 
\"kubernetes.io/projected/7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c-kube-api-access-fwl2f\") pod \"cilium-operator-6c4d7847fc-dvnp9\" (UID: \"7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c\") " pod="kube-system/cilium-operator-6c4d7847fc-dvnp9" Sep 6 01:24:46.356574 kubelet[1885]: I0906 01:24:46.356537 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-run\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356691 kubelet[1885]: I0906 01:24:46.356678 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-etc-cni-netd\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356825 kubelet[1885]: I0906 01:24:46.356812 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-net\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.356954 kubelet[1885]: I0906 01:24:46.356935 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-cgroup\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.357070 kubelet[1885]: I0906 01:24:46.357056 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-kernel\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.357188 kubelet[1885]: I0906 01:24:46.357174 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprn2\" (UniqueName: \"kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-kube-api-access-zprn2\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.357299 kubelet[1885]: I0906 01:24:46.357286 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dvnp9\" (UID: \"7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c\") " pod="kube-system/cilium-operator-6c4d7847fc-dvnp9" Sep 6 01:24:46.357404 kubelet[1885]: I0906 01:24:46.357392 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hostproc\") pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.357519 kubelet[1885]: I0906 01:24:46.357506 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-clustermesh-secrets\") 
pod \"cilium-6q8hs\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " pod="kube-system/cilium-6q8hs" Sep 6 01:24:46.640495 env[1476]: time="2025-09-06T01:24:46.640446425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dvnp9,Uid:7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c,Namespace:kube-system,Attempt:0,}" Sep 6 01:24:46.651938 env[1476]: time="2025-09-06T01:24:46.651886491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6q8hs,Uid:467267fd-7cf5-468a-88a9-2c5ca4f2fb37,Namespace:kube-system,Attempt:0,}" Sep 6 01:24:46.687866 env[1476]: time="2025-09-06T01:24:46.687658474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:46.687866 env[1476]: time="2025-09-06T01:24:46.687719155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:46.687866 env[1476]: time="2025-09-06T01:24:46.687729755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:46.688454 env[1476]: time="2025-09-06T01:24:46.688394086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a35e53df3b301d834b3c2e6c632ebdfd1e53bce3f413037d2b6fb458b505632 pid=3485 runtime=io.containerd.runc.v2 Sep 6 01:24:46.702595 systemd[1]: Started cri-containerd-7a35e53df3b301d834b3c2e6c632ebdfd1e53bce3f413037d2b6fb458b505632.scope. Sep 6 01:24:46.717836 env[1476]: time="2025-09-06T01:24:46.716678547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:46.717836 env[1476]: time="2025-09-06T01:24:46.716726748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:46.717836 env[1476]: time="2025-09-06T01:24:46.716779949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:46.717836 env[1476]: time="2025-09-06T01:24:46.716977672Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b pid=3513 runtime=io.containerd.runc.v2 Sep 6 01:24:46.730522 systemd[1]: Started cri-containerd-c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b.scope. 
Sep 6 01:24:46.753515 env[1476]: time="2025-09-06T01:24:46.753389225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dvnp9,Uid:7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a35e53df3b301d834b3c2e6c632ebdfd1e53bce3f413037d2b6fb458b505632\"" Sep 6 01:24:46.755330 env[1476]: time="2025-09-06T01:24:46.755215095Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 01:24:46.768609 env[1476]: time="2025-09-06T01:24:46.768553712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6q8hs,Uid:467267fd-7cf5-468a-88a9-2c5ca4f2fb37,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b\"" Sep 6 01:24:46.771953 env[1476]: time="2025-09-06T01:24:46.771903047Z" level=info msg="CreateContainer within sandbox \"c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:24:46.815251 env[1476]: time="2025-09-06T01:24:46.815186352Z" level=info msg="CreateContainer within sandbox \"c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\"" Sep 6 01:24:46.816053 env[1476]: time="2025-09-06T01:24:46.816017565Z" level=info msg="StartContainer for \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\"" Sep 6 01:24:46.831702 systemd[1]: Started cri-containerd-e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004.scope. Sep 6 01:24:46.844153 systemd[1]: cri-containerd-e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004.scope: Deactivated successfully. 
Sep 6 01:24:46.884701 env[1476]: time="2025-09-06T01:24:46.884637483Z" level=info msg="shim disconnected" id=e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004 Sep 6 01:24:46.884701 env[1476]: time="2025-09-06T01:24:46.884695244Z" level=warning msg="cleaning up after shim disconnected" id=e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004 namespace=k8s.io Sep 6 01:24:46.884701 env[1476]: time="2025-09-06T01:24:46.884705404Z" level=info msg="cleaning up dead shim" Sep 6 01:24:46.892866 env[1476]: time="2025-09-06T01:24:46.892716015Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3587 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T01:24:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 01:24:46.894112 env[1476]: time="2025-09-06T01:24:46.893982436Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Sep 6 01:24:46.894846 env[1476]: time="2025-09-06T01:24:46.894798849Z" level=error msg="Failed to pipe stdout of container \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\"" error="reading from a closed fifo" Sep 6 01:24:46.896861 env[1476]: time="2025-09-06T01:24:46.896819762Z" level=error msg="Failed to pipe stderr of container \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\"" error="reading from a closed fifo" Sep 6 01:24:46.902637 env[1476]: time="2025-09-06T01:24:46.902548815Z" level=error msg="StartContainer for \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 01:24:46.903102 kubelet[1885]: E0906 01:24:46.903058 1885 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004" Sep 6 01:24:46.903252 kubelet[1885]: E0906 01:24:46.903224 1885 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 01:24:46.903252 kubelet[1885]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 01:24:46.903252 kubelet[1885]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 01:24:46.903252 kubelet[1885]: rm /hostbin/cilium-mount Sep 6 01:24:46.903379 kubelet[1885]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zprn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6q8hs_kube-system(467267fd-7cf5-468a-88a9-2c5ca4f2fb37): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 01:24:46.903379 kubelet[1885]: > logger="UnhandledError" Sep 6 01:24:46.904393 kubelet[1885]: E0906 01:24:46.904357 1885 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6q8hs" podUID="467267fd-7cf5-468a-88a9-2c5ca4f2fb37" Sep 6 01:24:47.259518 kubelet[1885]: E0906 01:24:47.256644 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:47.455820 env[1476]: time="2025-09-06T01:24:47.455768588Z" level=info msg="StopPodSandbox for \"c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b\"" Sep 6 01:24:47.456000 env[1476]: time="2025-09-06T01:24:47.455841109Z" level=info msg="Container to stop \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:47.467900 systemd[1]: cri-containerd-c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b.scope: Deactivated successfully. Sep 6 01:24:47.491422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b-rootfs.mount: Deactivated successfully. 
Sep 6 01:24:47.506974 env[1476]: time="2025-09-06T01:24:47.506919084Z" level=info msg="shim disconnected" id=c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b Sep 6 01:24:47.506974 env[1476]: time="2025-09-06T01:24:47.506970084Z" level=warning msg="cleaning up after shim disconnected" id=c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b namespace=k8s.io Sep 6 01:24:47.506974 env[1476]: time="2025-09-06T01:24:47.506980085Z" level=info msg="cleaning up dead shim" Sep 6 01:24:47.515334 env[1476]: time="2025-09-06T01:24:47.514711648Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3619 runtime=io.containerd.runc.v2\n" Sep 6 01:24:47.515334 env[1476]: time="2025-09-06T01:24:47.515087174Z" level=info msg="TearDown network for sandbox \"c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b\" successfully" Sep 6 01:24:47.515334 env[1476]: time="2025-09-06T01:24:47.515111854Z" level=info msg="StopPodSandbox for \"c74c12bce97e74cf2ec1f0ef97a33a64948a219d9c1c555a609dd9960f5d2f8b\" returns successfully" Sep 6 01:24:47.566845 kubelet[1885]: I0906 01:24:47.566796 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-ipsec-secrets\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.566845 kubelet[1885]: I0906 01:24:47.566843 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-xtables-lock\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.566845 kubelet[1885]: I0906 01:24:47.566860 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-etc-cni-netd\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566879 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-kernel\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566897 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hubble-tls\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566910 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-bpf-maps\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566924 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cni-path\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: 
\"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566939 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hostproc\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566956 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprn2\" (UniqueName: \"kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-kube-api-access-zprn2\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566976 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-clustermesh-secrets\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.566995 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-config-path\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.567011 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-cgroup\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.567029 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-net\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.567044 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-lib-modules\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567086 kubelet[1885]: I0906 01:24:47.567058 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-run\") pod \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\" (UID: \"467267fd-7cf5-468a-88a9-2c5ca4f2fb37\") " Sep 6 01:24:47.567357 kubelet[1885]: I0906 01:24:47.567134 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.567509 kubelet[1885]: I0906 01:24:47.567474 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hostproc" (OuterVolumeSpecName: "hostproc") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.567601 kubelet[1885]: I0906 01:24:47.567583 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.567693 kubelet[1885]: I0906 01:24:47.567681 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.571643 systemd[1]: var-lib-kubelet-pods-467267fd\x2d7cf5\x2d468a\x2d88a9\x2d2c5ca4f2fb37-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:47.574502 systemd[1]: var-lib-kubelet-pods-467267fd\x2d7cf5\x2d468a\x2d88a9\x2d2c5ca4f2fb37-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:24:47.575196 kubelet[1885]: I0906 01:24:47.567873 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.575322 kubelet[1885]: I0906 01:24:47.568397 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.575392 kubelet[1885]: I0906 01:24:47.568418 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cni-path" (OuterVolumeSpecName: "cni-path") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.575519 kubelet[1885]: I0906 01:24:47.575489 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.577619 kubelet[1885]: I0906 01:24:47.577572 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:24:47.578140 kubelet[1885]: I0906 01:24:47.577895 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.578253 kubelet[1885]: I0906 01:24:47.577926 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:47.578983 kubelet[1885]: I0906 01:24:47.578956 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:47.579658 systemd[1]: var-lib-kubelet-pods-467267fd\x2d7cf5\x2d468a\x2d88a9\x2d2c5ca4f2fb37-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:47.583012 kubelet[1885]: I0906 01:24:47.582967 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:47.583187 kubelet[1885]: I0906 01:24:47.583082 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:47.583359 kubelet[1885]: I0906 01:24:47.583321 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-kube-api-access-zprn2" (OuterVolumeSpecName: "kube-api-access-zprn2") pod "467267fd-7cf5-468a-88a9-2c5ca4f2fb37" (UID: "467267fd-7cf5-468a-88a9-2c5ca4f2fb37"). InnerVolumeSpecName "kube-api-access-zprn2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:47.667584 kubelet[1885]: I0906 01:24:47.667525 1885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zprn2\" (UniqueName: \"kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-kube-api-access-zprn2\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667584 kubelet[1885]: I0906 01:24:47.667577 1885 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-clustermesh-secrets\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667584 kubelet[1885]: I0906 01:24:47.667590 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-config-path\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667606 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-cgroup\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667628 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-net\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667637 1885 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-lib-modules\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667644 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-run\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667652 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cilium-ipsec-secrets\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667668 1885 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-xtables-lock\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667677 1885 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-etc-cni-netd\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667685 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-host-proc-sys-kernel\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667694 1885 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hubble-tls\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667718 1885 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-bpf-maps\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667727 1885 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-cni-path\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.667872 kubelet[1885]: I0906 01:24:47.667769 1885 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/467267fd-7cf5-468a-88a9-2c5ca4f2fb37-hostproc\") on node \"10.200.20.4\" DevicePath \"\"" Sep 6 01:24:47.926309 env[1476]: time="2025-09-06T01:24:47.926225167Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/59/59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T012447Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e55447cdf5ec2757e22667295879d59513b69a11ad765d50debb6beed58011e6®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757122787~hmac=11e9be2152f6d9b6bd92a433579488327c65c5c758e9a780eaf1250c9df21a04\": dial tcp: lookup cdn01.quay.io: no such host" Sep 6 01:24:47.927131 kubelet[1885]: E0906 01:24:47.927037 1885 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/59/59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T012447Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e55447cdf5ec2757e22667295879d59513b69a11ad765d50debb6beed58011e6®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757122787~hmac=11e9be2152f6d9b6bd92a433579488327c65c5c758e9a780eaf1250c9df21a04\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 6 01:24:47.927265 kubelet[1885]: E0906 01:24:47.927138 1885 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/59/59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T012447Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e55447cdf5ec2757e22667295879d59513b69a11ad765d50debb6beed58011e6®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757122787~hmac=11e9be2152f6d9b6bd92a433579488327c65c5c758e9a780eaf1250c9df21a04\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 6 01:24:47.927372 kubelet[1885]: E0906 01:24:47.927314 1885 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map --debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwl2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-6c4d7847fc-dvnp9_kube-system(7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c): ErrImagePull: failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn01.quay.io/quayio-production-s3/sha256/59/59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T012447Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e55447cdf5ec2757e22667295879d59513b69a11ad765d50debb6beed58011e6®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757122787~hmac=11e9be2152f6d9b6bd92a433579488327c65c5c758e9a780eaf1250c9df21a04\": dial tcp: lookup cdn01.quay.io: no such host" logger="UnhandledError" Sep 6 01:24:47.928683 kubelet[1885]: E0906 01:24:47.928635 1885 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/59/59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T012447Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e55447cdf5ec2757e22667295879d59513b69a11ad765d50debb6beed58011e6®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757122787~hmac=11e9be2152f6d9b6bd92a433579488327c65c5c758e9a780eaf1250c9df21a04\\\": dial tcp: lookup cdn01.quay.io: no such host\"" pod="kube-system/cilium-operator-6c4d7847fc-dvnp9" podUID="7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c" Sep 6 01:24:48.258048 kubelet[1885]: E0906 01:24:48.257330 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:48.462826 systemd[1]: var-lib-kubelet-pods-467267fd\x2d7cf5\x2d468a\x2d88a9\x2d2c5ca4f2fb37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzprn2.mount: Deactivated successfully. 
Sep 6 01:24:48.465063 kubelet[1885]: I0906 01:24:48.465014 1885 scope.go:117] "RemoveContainer" containerID="e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004" Sep 6 01:24:48.465975 kubelet[1885]: E0906 01:24:48.465904 1885 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": ErrImagePull: failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/59/59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T012447Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=e55447cdf5ec2757e22667295879d59513b69a11ad765d50debb6beed58011e6®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757122787~hmac=11e9be2152f6d9b6bd92a433579488327c65c5c758e9a780eaf1250c9df21a04\\\": dial tcp: lookup cdn01.quay.io: no such host\"" pod="kube-system/cilium-operator-6c4d7847fc-dvnp9" podUID="7b7b03b8-1c07-4cf9-bd3f-897d0c4e2e3c" Sep 6 01:24:48.467592 env[1476]: time="2025-09-06T01:24:48.467545998Z" level=info msg="RemoveContainer for \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\"" Sep 6 01:24:48.469032 systemd[1]: Removed slice kubepods-burstable-pod467267fd_7cf5_468a_88a9_2c5ca4f2fb37.slice. Sep 6 01:24:48.480322 env[1476]: time="2025-09-06T01:24:48.480270277Z" level=info msg="RemoveContainer for \"e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004\" returns successfully" Sep 6 01:24:48.540437 kubelet[1885]: I0906 01:24:48.539878 1885 memory_manager.go:355] "RemoveStaleState removing state" podUID="467267fd-7cf5-468a-88a9-2c5ca4f2fb37" containerName="mount-cgroup" Sep 6 01:24:48.545957 systemd[1]: Created slice kubepods-burstable-pod6e8af50d_54ae_4551_9924_9bf658fae4cc.slice. 
Sep 6 01:24:48.573519 kubelet[1885]: I0906 01:24:48.573470 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-cilium-run\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.573727 kubelet[1885]: I0906 01:24:48.573712 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e8af50d-54ae-4551-9924-9bf658fae4cc-clustermesh-secrets\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.573865 kubelet[1885]: I0906 01:24:48.573851 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e8af50d-54ae-4551-9924-9bf658fae4cc-cilium-ipsec-secrets\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.573976 kubelet[1885]: I0906 01:24:48.573963 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-host-proc-sys-net\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574091 kubelet[1885]: I0906 01:24:48.574080 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-host-proc-sys-kernel\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574209 kubelet[1885]: I0906 01:24:48.574185 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-xtables-lock\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574310 kubelet[1885]: I0906 01:24:48.574297 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xfql\" (UniqueName: \"kubernetes.io/projected/6e8af50d-54ae-4551-9924-9bf658fae4cc-kube-api-access-4xfql\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574435 kubelet[1885]: I0906 01:24:48.574422 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-cni-path\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574533 kubelet[1885]: I0906 01:24:48.574522 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-lib-modules\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574636 kubelet[1885]: I0906 01:24:48.574624 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-bpf-maps\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574760 kubelet[1885]: I0906 01:24:48.574725 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-hostproc\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574851 kubelet[1885]: I0906 01:24:48.574839 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-cilium-cgroup\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.574957 kubelet[1885]: I0906 01:24:48.574939 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e8af50d-54ae-4551-9924-9bf658fae4cc-etc-cni-netd\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.575075 kubelet[1885]: I0906 01:24:48.575059 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e8af50d-54ae-4551-9924-9bf658fae4cc-cilium-config-path\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.575187 kubelet[1885]: I0906 01:24:48.575175 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e8af50d-54ae-4551-9924-9bf658fae4cc-hubble-tls\") pod \"cilium-554xk\" (UID: \"6e8af50d-54ae-4551-9924-9bf658fae4cc\") " pod="kube-system/cilium-554xk" Sep 6 01:24:48.856139 env[1476]: time="2025-09-06T01:24:48.856026538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-554xk,Uid:6e8af50d-54ae-4551-9924-9bf658fae4cc,Namespace:kube-system,Attempt:0,}" Sep 6 01:24:48.890359 env[1476]: time="2025-09-06T01:24:48.890268632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:48.890589 env[1476]: time="2025-09-06T01:24:48.890563396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:48.890697 env[1476]: time="2025-09-06T01:24:48.890675598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:48.891102 env[1476]: time="2025-09-06T01:24:48.891045284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93 pid=3649 runtime=io.containerd.runc.v2 Sep 6 01:24:48.902684 systemd[1]: Started cri-containerd-2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93.scope. 
Sep 6 01:24:48.925970 env[1476]: time="2025-09-06T01:24:48.925718425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-554xk,Uid:6e8af50d-54ae-4551-9924-9bf658fae4cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\"" Sep 6 01:24:48.931200 env[1476]: time="2025-09-06T01:24:48.931077268Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:24:48.974141 env[1476]: time="2025-09-06T01:24:48.974089699Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c\"" Sep 6 01:24:48.975169 env[1476]: time="2025-09-06T01:24:48.975133836Z" level=info msg="StartContainer for \"32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c\"" Sep 6 01:24:48.990361 systemd[1]: Started cri-containerd-32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c.scope. Sep 6 01:24:49.023570 env[1476]: time="2025-09-06T01:24:49.023502343Z" level=info msg="StartContainer for \"32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c\" returns successfully" Sep 6 01:24:49.026506 systemd[1]: cri-containerd-32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c.scope: Deactivated successfully. Sep 6 01:24:49.084598 env[1476]: time="2025-09-06T01:24:49.084548075Z" level=info msg="shim disconnected" id=32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c Sep 6 01:24:49.084935 env[1476]: time="2025-09-06T01:24:49.084910000Z" level=warning msg="cleaning up after shim disconnected" id=32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c namespace=k8s.io Sep 6 01:24:49.085027 env[1476]: time="2025-09-06T01:24:49.085013682Z" level=info msg="cleaning up dead shim" Sep 6 01:24:49.093041 env[1476]: time="2025-09-06T01:24:49.092992484Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3735 runtime=io.containerd.runc.v2\n" Sep 6 01:24:49.257928 kubelet[1885]: E0906 01:24:49.257793 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:49.315920 kubelet[1885]: I0906 01:24:49.315882 1885 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467267fd-7cf5-468a-88a9-2c5ca4f2fb37" path="/var/lib/kubelet/pods/467267fd-7cf5-468a-88a9-2c5ca4f2fb37/volumes" Sep 6 01:24:49.328999 kubelet[1885]: E0906 01:24:49.328950 1885 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:24:49.469777 env[1476]: time="2025-09-06T01:24:49.469697955Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:24:49.507795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808221406.mount: Deactivated successfully. 
Sep 6 01:24:49.524325 env[1476]: time="2025-09-06T01:24:49.524267948Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f\"" Sep 6 01:24:49.525473 env[1476]: time="2025-09-06T01:24:49.525421166Z" level=info msg="StartContainer for \"d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f\"" Sep 6 01:24:49.541684 systemd[1]: Started cri-containerd-d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f.scope. Sep 6 01:24:49.575193 env[1476]: time="2025-09-06T01:24:49.575140085Z" level=info msg="StartContainer for \"d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f\" returns successfully" Sep 6 01:24:49.576727 systemd[1]: cri-containerd-d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f.scope: Deactivated successfully. Sep 6 01:24:49.610787 env[1476]: time="2025-09-06T01:24:49.610712348Z" level=info msg="shim disconnected" id=d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f Sep 6 01:24:49.610787 env[1476]: time="2025-09-06T01:24:49.610780869Z" level=warning msg="cleaning up after shim disconnected" id=d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f namespace=k8s.io Sep 6 01:24:49.610787 env[1476]: time="2025-09-06T01:24:49.610791229Z" level=info msg="cleaning up dead shim" Sep 6 01:24:49.618919 env[1476]: time="2025-09-06T01:24:49.618861953Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n" Sep 6 01:24:49.991775 kubelet[1885]: W0906 01:24:49.991143 1885 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod467267fd_7cf5_468a_88a9_2c5ca4f2fb37.slice/cri-containerd-e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004.scope WatchSource:0}: container "e03d93aa556a03cf2406a8dc0334fa9d0d2a75a1b3a2fcc38a8d619068140004" in namespace "k8s.io": not found Sep 6 01:24:50.258099 kubelet[1885]: E0906 01:24:50.257950 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:50.475760 env[1476]: time="2025-09-06T01:24:50.475697883Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:24:50.508837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897346536.mount: Deactivated successfully. Sep 6 01:24:50.515795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326752182.mount: Deactivated successfully. Sep 6 01:24:50.535340 env[1476]: time="2025-09-06T01:24:50.535284333Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f\"" Sep 6 01:24:50.536146 env[1476]: time="2025-09-06T01:24:50.536075505Z" level=info msg="StartContainer for \"52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f\"" Sep 6 01:24:50.552653 systemd[1]: Started cri-containerd-52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f.scope. 
Sep 6 01:24:50.586334 systemd[1]: cri-containerd-52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f.scope: Deactivated successfully. Sep 6 01:24:50.588661 env[1476]: time="2025-09-06T01:24:50.588531529Z" level=info msg="StartContainer for \"52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f\" returns successfully" Sep 6 01:24:50.623266 env[1476]: time="2025-09-06T01:24:50.623193447Z" level=info msg="shim disconnected" id=52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f Sep 6 01:24:50.623266 env[1476]: time="2025-09-06T01:24:50.623256128Z" level=warning msg="cleaning up after shim disconnected" id=52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f namespace=k8s.io Sep 6 01:24:50.623266 env[1476]: time="2025-09-06T01:24:50.623268408Z" level=info msg="cleaning up dead shim" Sep 6 01:24:50.631330 env[1476]: time="2025-09-06T01:24:50.631275608Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3857 runtime=io.containerd.runc.v2\n" Sep 6 01:24:50.948767 kubelet[1885]: I0906 01:24:50.948486 1885 setters.go:602] "Node became not ready" node="10.200.20.4" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T01:24:50Z","lastTransitionTime":"2025-09-06T01:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 01:24:51.259217 kubelet[1885]: E0906 01:24:51.258859 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:51.480862 env[1476]: time="2025-09-06T01:24:51.480815638Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 01:24:51.522393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778026736.mount: Deactivated successfully. Sep 6 01:24:51.535906 env[1476]: time="2025-09-06T01:24:51.535817723Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870\"" Sep 6 01:24:51.536849 env[1476]: time="2025-09-06T01:24:51.536811378Z" level=info msg="StartContainer for \"0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870\"" Sep 6 01:24:51.552598 systemd[1]: Started cri-containerd-0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870.scope. Sep 6 01:24:51.579520 systemd[1]: cri-containerd-0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870.scope: Deactivated successfully. 
Sep 6 01:24:51.582592 env[1476]: time="2025-09-06T01:24:51.581714555Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8af50d_54ae_4551_9924_9bf658fae4cc.slice/cri-containerd-0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870.scope/memory.events\": no such file or directory" Sep 6 01:24:51.587917 env[1476]: time="2025-09-06T01:24:51.587851285Z" level=info msg="StartContainer for \"0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870\" returns successfully" Sep 6 01:24:51.620946 env[1476]: time="2025-09-06T01:24:51.620895128Z" level=info msg="shim disconnected" id=0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870 Sep 6 01:24:51.621286 env[1476]: time="2025-09-06T01:24:51.621258934Z" level=warning msg="cleaning up after shim disconnected" id=0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870 namespace=k8s.io Sep 6 01:24:51.621390 env[1476]: time="2025-09-06T01:24:51.621374895Z" level=info msg="cleaning up dead shim" Sep 6 01:24:51.628824 env[1476]: time="2025-09-06T01:24:51.628776444Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3914 runtime=io.containerd.runc.v2\n" Sep 6 01:24:52.259656 kubelet[1885]: E0906 01:24:52.259602 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:52.483487 env[1476]: time="2025-09-06T01:24:52.483434810Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:24:52.519044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397860384.mount: Deactivated successfully. Sep 6 01:24:52.525875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616948494.mount: Deactivated successfully. Sep 6 01:24:52.544368 env[1476]: time="2025-09-06T01:24:52.544317122Z" level=info msg="CreateContainer within sandbox \"2ab43c77495556721bcaac67fff6fc227e96b70b245d67e89b18758531043e93\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da\"" Sep 6 01:24:52.545402 env[1476]: time="2025-09-06T01:24:52.545363537Z" level=info msg="StartContainer for \"9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da\"" Sep 6 01:24:52.560761 systemd[1]: Started cri-containerd-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da.scope. 
Sep 6 01:24:52.600721 env[1476]: time="2025-09-06T01:24:52.600670930Z" level=info msg="StartContainer for \"9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da\" returns successfully" Sep 6 01:24:52.921956 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 6 01:24:53.104008 kubelet[1885]: W0906 01:24:53.103411 1885 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8af50d_54ae_4551_9924_9bf658fae4cc.slice/cri-containerd-32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c.scope WatchSource:0}: task 32d73ffbdf9b5a9be3929b9315813c69f5427fc42536d6cf9053ceed82c7b03c not found: not found Sep 6 01:24:53.260008 kubelet[1885]: E0906 01:24:53.259879 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:54.260712 kubelet[1885]: E0906 01:24:54.260657 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:54.367662 systemd[1]: run-containerd-runc-k8s.io-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da-runc.MLB2jA.mount: Deactivated successfully. Sep 6 01:24:55.261686 kubelet[1885]: E0906 01:24:55.261640 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:55.687514 systemd-networkd[1626]: lxc_health: Link UP Sep 6 01:24:55.704791 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:24:55.703967 systemd-networkd[1626]: lxc_health: Gained carrier Sep 6 01:24:56.212458 kubelet[1885]: W0906 01:24:56.211922 1885 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8af50d_54ae_4551_9924_9bf658fae4cc.slice/cri-containerd-d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f.scope WatchSource:0}: task d6da507cb60a7c850d731b86523a2ec3cd7ce5fbf4fc3aa0fd039d7c878a0c8f not found: not found Sep 6 01:24:56.263168 kubelet[1885]: E0906 01:24:56.263107 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:56.519483 systemd[1]: run-containerd-runc-k8s.io-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da-runc.JxjynN.mount: Deactivated successfully. 
Sep 6 01:24:56.883806 kubelet[1885]: I0906 01:24:56.883725 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-554xk" podStartSLOduration=8.883709605 podStartE2EDuration="8.883709605s" podCreationTimestamp="2025-09-06 01:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:24:53.513195984 +0000 UTC m=+75.170526727" watchObservedRunningTime="2025-09-06 01:24:56.883709605 +0000 UTC m=+78.541040348" Sep 6 01:24:57.193917 systemd-networkd[1626]: lxc_health: Gained IPv6LL Sep 6 01:24:57.263597 kubelet[1885]: E0906 01:24:57.263556 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:58.264934 kubelet[1885]: E0906 01:24:58.264893 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:58.720626 systemd[1]: run-containerd-runc-k8s.io-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da-runc.TVGibI.mount: Deactivated successfully. Sep 6 01:24:59.206470 kubelet[1885]: E0906 01:24:59.206416 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:59.265635 kubelet[1885]: E0906 01:24:59.265573 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:24:59.324890 kubelet[1885]: W0906 01:24:59.324852 1885 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8af50d_54ae_4551_9924_9bf658fae4cc.slice/cri-containerd-52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f.scope WatchSource:0}: task 52a0e1b8c34b6a3d89a758a58da06dfe075f03350315a2afeeca66f329bfff7f not found: not found Sep 6 01:25:00.265959 kubelet[1885]: E0906 01:25:00.265911 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:00.315875 env[1476]: time="2025-09-06T01:25:00.315827062Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 01:25:00.847448 systemd[1]: run-containerd-runc-k8s.io-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da-runc.GkUKlU.mount: Deactivated successfully. Sep 6 01:25:01.266877 kubelet[1885]: E0906 01:25:01.266504 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:01.916462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200032602.mount: Deactivated successfully. 
Sep 6 01:25:02.267413 kubelet[1885]: E0906 01:25:02.267289 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:02.432558 kubelet[1885]: W0906 01:25:02.432440 1885 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e8af50d_54ae_4551_9924_9bf658fae4cc.slice/cri-containerd-0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870.scope WatchSource:0}: task 0457c9806065970c5108bc5a4ba416a0624a87b358db97187e18947637259870 not found: not found Sep 6 01:25:02.628666 env[1476]: time="2025-09-06T01:25:02.628620322Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:25:02.637080 env[1476]: time="2025-09-06T01:25:02.637038261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:25:02.641084 env[1476]: time="2025-09-06T01:25:02.641031749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:25:02.641832 env[1476]: time="2025-09-06T01:25:02.641799118Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 01:25:02.644267 env[1476]: time="2025-09-06T01:25:02.644234026Z" level=info msg="CreateContainer within sandbox \"7a35e53df3b301d834b3c2e6c632ebdfd1e53bce3f413037d2b6fb458b505632\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 01:25:02.672849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816544242.mount: Deactivated successfully. Sep 6 01:25:02.692645 env[1476]: time="2025-09-06T01:25:02.692587877Z" level=info msg="CreateContainer within sandbox \"7a35e53df3b301d834b3c2e6c632ebdfd1e53bce3f413037d2b6fb458b505632\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f30b31646634fdb723447619834b092bbcab95aaac9f53bbf665fc52cf111aad\"" Sep 6 01:25:02.693282 env[1476]: time="2025-09-06T01:25:02.693202484Z" level=info msg="StartContainer for \"f30b31646634fdb723447619834b092bbcab95aaac9f53bbf665fc52cf111aad\"" Sep 6 01:25:02.708599 systemd[1]: Started cri-containerd-f30b31646634fdb723447619834b092bbcab95aaac9f53bbf665fc52cf111aad.scope. Sep 6 01:25:02.742040 env[1476]: time="2025-09-06T01:25:02.741963220Z" level=info msg="StartContainer for \"f30b31646634fdb723447619834b092bbcab95aaac9f53bbf665fc52cf111aad\" returns successfully" Sep 6 01:25:02.907498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670833734.mount: Deactivated successfully. Sep 6 01:25:02.962068 systemd[1]: run-containerd-runc-k8s.io-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da-runc.UUdsYQ.mount: Deactivated successfully. 
Sep 6 01:25:03.268053 kubelet[1885]: E0906 01:25:03.267668 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:03.518315 kubelet[1885]: I0906 01:25:03.518181 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dvnp9" podStartSLOduration=1.6304239919999999 podStartE2EDuration="17.51816047s" podCreationTimestamp="2025-09-06 01:24:46 +0000 UTC" firstStartedPulling="2025-09-06 01:24:46.75490409 +0000 UTC m=+68.412234833" lastFinishedPulling="2025-09-06 01:25:02.642640568 +0000 UTC m=+84.299971311" observedRunningTime="2025-09-06 01:25:03.517346541 +0000 UTC m=+85.174677284" watchObservedRunningTime="2025-09-06 01:25:03.51816047 +0000 UTC m=+85.175491213" Sep 6 01:25:04.268615 kubelet[1885]: E0906 01:25:04.268572 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:05.074611 systemd[1]: run-containerd-runc-k8s.io-9c5a91e9388130a0ceccd09ffb743fc1a908400d9316a9a276773661837186da-runc.WLXthl.mount: Deactivated successfully. Sep 6 01:25:05.269947 kubelet[1885]: E0906 01:25:05.269905 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:06.270453 kubelet[1885]: E0906 01:25:06.270411 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:07.271204 kubelet[1885]: E0906 01:25:07.271168 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:08.272319 kubelet[1885]: E0906 01:25:08.272268 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:09.272803 kubelet[1885]: E0906 01:25:09.272748 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 01:25:10.273167 kubelet[1885]: E0906 01:25:10.273110 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"