Jul 12 00:40:38.046592 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:40:38.046624 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025
Jul 12 00:40:38.046632 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 12 00:40:38.046640 kernel: printk: bootconsole [pl11] enabled
Jul 12 00:40:38.046646 kernel: efi: EFI v2.70 by EDK II
Jul 12 00:40:38.046652 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Jul 12 00:40:38.046663 kernel: random: crng init done
Jul 12 00:40:38.046668 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:40:38.046674 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 12 00:40:38.046679 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046685 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046691 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 12 00:40:38.046701 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046707 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046715 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046721 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046731 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046739 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046745 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 12 00:40:38.046751 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:40:38.046760 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 12 00:40:38.046766 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:40:38.046773 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Jul 12 00:40:38.046778 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Jul 12 00:40:38.046789 kernel: Zone ranges:
Jul 12 00:40:38.046794 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 12 00:40:38.046800 kernel: DMA32 empty
Jul 12 00:40:38.046806 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:40:38.046813 kernel: Movable zone start for each node
Jul 12 00:40:38.046819 kernel: Early memory node ranges
Jul 12 00:40:38.046825 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 12 00:40:38.046831 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jul 12 00:40:38.046837 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 12 00:40:38.046842 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 12 00:40:38.046848 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 12 00:40:38.046854 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 12 00:40:38.046860 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:40:38.046867 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 12 00:40:38.046873 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 12 00:40:38.046879 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:40:38.046889 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:40:38.046895 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:40:38.046901 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 12 00:40:38.046907 kernel: psci: SMC Calling Convention v1.4
Jul 12 00:40:38.046913 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Jul 12 00:40:38.046922 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Jul 12 00:40:38.046928 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Jul 12 00:40:38.046935 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Jul 12 00:40:38.046941 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:40:38.046947 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:40:38.046954 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:40:38.046961 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:40:38.046967 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:40:38.046973 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:40:38.046981 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:40:38.046988 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:40:38.046997 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 12 00:40:38.047003 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:40:38.047010 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 12 00:40:38.047016 kernel: Policy zone: Normal
Jul 12 00:40:38.047024 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:40:38.047032 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:40:38.047038 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:40:38.047045 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:40:38.047051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:40:38.047058 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Jul 12 00:40:38.047064 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Jul 12 00:40:38.047072 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:40:38.047078 kernel: trace event string verifier disabled
Jul 12 00:40:38.047085 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:40:38.047092 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:40:38.047098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:40:38.047104 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:40:38.047111 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:40:38.047117 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:40:38.047124 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:40:38.047130 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:40:38.047136 kernel: GICv3: 960 SPIs implemented
Jul 12 00:40:38.047143 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:40:38.047149 kernel: GICv3: Distributor has no Range Selector support
Jul 12 00:40:38.047155 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:40:38.047161 kernel: GICv3: 16 PPIs implemented
Jul 12 00:40:38.047167 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 12 00:40:38.047173 kernel: ITS: No ITS available, not enabling LPIs
Jul 12 00:40:38.047180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:40:38.047186 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:40:38.047192 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:40:38.047199 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:40:38.047205 kernel: Console: colour dummy device 80x25
Jul 12 00:40:38.047213 kernel: printk: console [tty1] enabled
Jul 12 00:40:38.047219 kernel: ACPI: Core revision 20210730
Jul 12 00:40:38.047226 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:40:38.047232 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:40:38.047238 kernel: LSM: Security Framework initializing
Jul 12 00:40:38.047244 kernel: SELinux: Initializing.
Jul 12 00:40:38.047251 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:40:38.047257 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:40:38.047264 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 12 00:40:38.047271 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 12 00:40:38.047278 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:40:38.047284 kernel: Remapping and enabling EFI services.
Jul 12 00:40:38.047290 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:40:38.047296 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:40:38.047303 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 12 00:40:38.047309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:40:38.047315 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:40:38.047322 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:40:38.047328 kernel: SMP: Total of 2 processors activated.
Jul 12 00:40:38.047336 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:40:38.047342 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 12 00:40:38.047349 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:40:38.047355 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:40:38.047362 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:40:38.047368 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:40:38.047375 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:40:38.047381 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:40:38.047387 kernel: alternatives: patching kernel code
Jul 12 00:40:38.047395 kernel: devtmpfs: initialized
Jul 12 00:40:38.047407 kernel: KASLR enabled
Jul 12 00:40:38.047414 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:40:38.047422 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:40:38.047429 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:40:38.047435 kernel: SMBIOS 3.1.0 present.
Jul 12 00:40:38.047442 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 12 00:40:38.047449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:40:38.047456 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:40:38.047465 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:40:38.047472 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:40:38.047479 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:40:38.047485 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Jul 12 00:40:38.047509 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:40:38.047516 kernel: cpuidle: using governor menu
Jul 12 00:40:38.047523 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:40:38.047532 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:40:38.047539 kernel: ACPI: bus type PCI registered
Jul 12 00:40:38.047545 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:40:38.047552 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:40:38.047559 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:40:38.047566 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:40:38.047573 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:40:38.047580 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:40:38.047587 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 00:40:38.047595 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:40:38.047602 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:40:38.047609 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:40:38.047615 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:40:38.047622 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 12 00:40:38.047629 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 12 00:40:38.047636 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 12 00:40:38.047642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:40:38.047649 kernel: ACPI: Interpreter enabled
Jul 12 00:40:38.047657 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:40:38.047664 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:40:38.047671 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:40:38.047677 kernel: printk: bootconsole [pl11] disabled
Jul 12 00:40:38.047684 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 12 00:40:38.047691 kernel: iommu: Default domain type: Translated
Jul 12 00:40:38.047698 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:40:38.047704 kernel: vgaarb: loaded
Jul 12 00:40:38.047711 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:40:38.047718 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:40:38.047726 kernel: PTP clock support registered
Jul 12 00:40:38.047733 kernel: Registered efivars operations
Jul 12 00:40:38.047740 kernel: No ACPI PMU IRQ for CPU0
Jul 12 00:40:38.047746 kernel: No ACPI PMU IRQ for CPU1
Jul 12 00:40:38.047753 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:40:38.047760 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:40:38.047767 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:40:38.047774 kernel: pnp: PnP ACPI init
Jul 12 00:40:38.047780 kernel: pnp: PnP ACPI: found 0 devices
Jul 12 00:40:38.047789 kernel: NET: Registered PF_INET protocol family
Jul 12 00:40:38.047796 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:40:38.047804 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:40:38.047811 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:40:38.047818 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:40:38.047825 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 12 00:40:38.047832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:40:38.047839 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:40:38.047847 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:40:38.047855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:40:38.047861 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:40:38.047868 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 12 00:40:38.047875 kernel: kvm [1]: HYP mode not available
Jul 12 00:40:38.047882 kernel: Initialise system trusted keyrings
Jul 12 00:40:38.047889 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:40:38.047895 kernel: Key type asymmetric registered
Jul 12 00:40:38.047902 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:40:38.047910 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 12 00:40:38.047916 kernel: io scheduler mq-deadline registered
Jul 12 00:40:38.047923 kernel: io scheduler kyber registered
Jul 12 00:40:38.047930 kernel: io scheduler bfq registered
Jul 12 00:40:38.047937 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:40:38.047944 kernel: thunder_xcv, ver 1.0
Jul 12 00:40:38.047951 kernel: thunder_bgx, ver 1.0
Jul 12 00:40:38.047957 kernel: nicpf, ver 1.0
Jul 12 00:40:38.047964 kernel: nicvf, ver 1.0
Jul 12 00:40:38.048120 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:40:38.048185 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:40:37 UTC (1752280837)
Jul 12 00:40:38.048194 kernel: efifb: probing for efifb
Jul 12 00:40:38.048202 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 12 00:40:38.048209 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 12 00:40:38.048217 kernel: efifb: scrolling: redraw
Jul 12 00:40:38.048224 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 00:40:38.048231 kernel: Console: switching to colour frame buffer device 128x48
Jul 12 00:40:38.048240 kernel: fb0: EFI VGA frame buffer device
Jul 12 00:40:38.048247 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 12 00:40:38.048254 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:40:38.048261 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:40:38.048268 kernel: Segment Routing with IPv6
Jul 12 00:40:38.048274 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:40:38.048281 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:40:38.048287 kernel: Key type dns_resolver registered
Jul 12 00:40:38.048294 kernel: registered taskstats version 1
Jul 12 00:40:38.048301 kernel: Loading compiled-in X.509 certificates
Jul 12 00:40:38.048310 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a'
Jul 12 00:40:38.048317 kernel: Key type .fscrypt registered
Jul 12 00:40:38.048324 kernel: Key type fscrypt-provisioning registered
Jul 12 00:40:38.048331 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:40:38.048338 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:40:38.048345 kernel: ima: No architecture policies found
Jul 12 00:40:38.048352 kernel: clk: Disabling unused clocks
Jul 12 00:40:38.048359 kernel: Freeing unused kernel memory: 36416K
Jul 12 00:40:38.048367 kernel: Run /init as init process
Jul 12 00:40:38.048374 kernel: with arguments:
Jul 12 00:40:38.048381 kernel: /init
Jul 12 00:40:38.048387 kernel: with environment:
Jul 12 00:40:38.048394 kernel: HOME=/
Jul 12 00:40:38.048400 kernel: TERM=linux
Jul 12 00:40:38.048408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:40:38.048417 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 12 00:40:38.048429 systemd[1]: Detected virtualization microsoft.
Jul 12 00:40:38.048437 systemd[1]: Detected architecture arm64.
Jul 12 00:40:38.048444 systemd[1]: Running in initrd.
Jul 12 00:40:38.048451 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:40:38.048459 systemd[1]: Hostname set to .
Jul 12 00:40:38.048466 systemd[1]: Initializing machine ID from random generator.
Jul 12 00:40:38.048473 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:40:38.048481 systemd[1]: Started systemd-ask-password-console.path.
Jul 12 00:40:38.048503 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:40:38.048511 systemd[1]: Reached target paths.target.
Jul 12 00:40:38.048518 systemd[1]: Reached target slices.target.
Jul 12 00:40:38.048525 systemd[1]: Reached target swap.target.
Jul 12 00:40:38.048532 systemd[1]: Reached target timers.target.
Jul 12 00:40:38.048540 systemd[1]: Listening on iscsid.socket.
Jul 12 00:40:38.048548 systemd[1]: Listening on iscsiuio.socket.
Jul 12 00:40:38.048555 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 12 00:40:38.048564 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 12 00:40:38.048572 systemd[1]: Listening on systemd-journald.socket.
Jul 12 00:40:38.048579 systemd[1]: Listening on systemd-networkd.socket.
Jul 12 00:40:38.048586 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 12 00:40:38.048593 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 12 00:40:38.048601 systemd[1]: Reached target sockets.target.
Jul 12 00:40:38.048608 systemd[1]: Starting kmod-static-nodes.service...
Jul 12 00:40:38.048615 systemd[1]: Finished network-cleanup.service.
Jul 12 00:40:38.048623 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:40:38.048632 systemd[1]: Starting systemd-journald.service...
Jul 12 00:40:38.048639 systemd[1]: Starting systemd-modules-load.service...
Jul 12 00:40:38.048646 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:40:38.048654 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 12 00:40:38.048667 systemd-journald[276]: Journal started
Jul 12 00:40:38.048708 systemd-journald[276]: Runtime Journal (/run/log/journal/06ec960acd2b42b8b8f2d9a79fcf0277) is 8.0M, max 78.5M, 70.5M free.
Jul 12 00:40:38.029888 systemd-modules-load[277]: Inserted module 'overlay'
Jul 12 00:40:38.071007 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:40:38.066994 systemd-resolved[278]: Positive Trust Anchors:
Jul 12 00:40:38.083770 systemd[1]: Started systemd-journald.service.
Jul 12 00:40:38.067002 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:40:38.067032 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:40:38.154605 kernel: Bridge firewalling registered
Jul 12 00:40:38.154630 kernel: audit: type=1130 audit(1752280838.130:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.154642 kernel: SCSI subsystem initialized
Jul 12 00:40:38.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.069183 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 12 00:40:38.227607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:40:38.227635 kernel: audit: type=1130 audit(1752280838.158:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.227648 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:40:38.227659 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 12 00:40:38.227670 kernel: audit: type=1130 audit(1752280838.205:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.096238 systemd-modules-load[277]: Inserted module 'br_netfilter'
Jul 12 00:40:38.256587 kernel: audit: type=1130 audit(1752280838.232:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.131173 systemd[1]: Started systemd-resolved.service.
Jul 12 00:40:38.185227 systemd[1]: Finished kmod-static-nodes.service.
Jul 12 00:40:38.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.205951 systemd-modules-load[277]: Inserted module 'dm_multipath'
Jul 12 00:40:38.315226 kernel: audit: type=1130 audit(1752280838.257:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.315253 kernel: audit: type=1130 audit(1752280838.283:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.207004 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:40:38.232929 systemd[1]: Finished systemd-modules-load.service.
Jul 12 00:40:38.258073 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 12 00:40:38.284162 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:40:38.374579 kernel: audit: type=1130 audit(1752280838.349:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.314845 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 12 00:40:38.402597 kernel: audit: type=1130 audit(1752280838.378:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.320105 systemd[1]: Starting systemd-sysctl.service...
Jul 12 00:40:38.324825 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 12 00:40:38.433731 kernel: audit: type=1130 audit(1752280838.403:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.336100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 12 00:40:38.370781 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 12 00:40:38.443345 dracut-cmdline[298]: dracut-dracut-053
Jul 12 00:40:38.379467 systemd[1]: Finished systemd-sysctl.service.
Jul 12 00:40:38.451307 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65
Jul 12 00:40:38.405034 systemd[1]: Starting dracut-cmdline.service...
Jul 12 00:40:38.506527 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:40:38.521514 kernel: iscsi: registered transport (tcp)
Jul 12 00:40:38.542963 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:40:38.543030 kernel: QLogic iSCSI HBA Driver
Jul 12 00:40:38.579548 systemd[1]: Finished dracut-cmdline.service.
Jul 12 00:40:38.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:38.585572 systemd[1]: Starting dracut-pre-udev.service...
Jul 12 00:40:38.639526 kernel: raid6: neonx8 gen() 13729 MB/s
Jul 12 00:40:38.659529 kernel: raid6: neonx8 xor() 10806 MB/s
Jul 12 00:40:38.679526 kernel: raid6: neonx4 gen() 13520 MB/s
Jul 12 00:40:38.700531 kernel: raid6: neonx4 xor() 10877 MB/s
Jul 12 00:40:38.720502 kernel: raid6: neonx2 gen() 12955 MB/s
Jul 12 00:40:38.740502 kernel: raid6: neonx2 xor() 10400 MB/s
Jul 12 00:40:38.761504 kernel: raid6: neonx1 gen() 10571 MB/s
Jul 12 00:40:38.781501 kernel: raid6: neonx1 xor() 8819 MB/s
Jul 12 00:40:38.801506 kernel: raid6: int64x8 gen() 6275 MB/s
Jul 12 00:40:38.823503 kernel: raid6: int64x8 xor() 3542 MB/s
Jul 12 00:40:38.843501 kernel: raid6: int64x4 gen() 7231 MB/s
Jul 12 00:40:38.863503 kernel: raid6: int64x4 xor() 3858 MB/s
Jul 12 00:40:38.885533 kernel: raid6: int64x2 gen() 6152 MB/s
Jul 12 00:40:38.906506 kernel: raid6: int64x2 xor() 3320 MB/s
Jul 12 00:40:38.939506 kernel: raid6: int64x1 gen() 5043 MB/s
Jul 12 00:40:38.964044 kernel: raid6: int64x1 xor() 2647 MB/s
Jul 12 00:40:38.964067 kernel: raid6: using algorithm neonx8 gen() 13729 MB/s
Jul 12 00:40:38.964076 kernel: raid6: .... xor() 10806 MB/s, rmw enabled
Jul 12 00:40:38.968328 kernel: raid6: using neon recovery algorithm
Jul 12 00:40:38.996041 kernel: xor: measuring software checksum speed
Jul 12 00:40:38.996078 kernel: 8regs : 17231 MB/sec
Jul 12 00:40:39.000052 kernel: 32regs : 20634 MB/sec
Jul 12 00:40:39.004066 kernel: arm64_neon : 27757 MB/sec
Jul 12 00:40:39.008622 kernel: xor: using function: arm64_neon (27757 MB/sec)
Jul 12 00:40:39.073516 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 12 00:40:39.083687 systemd[1]: Finished dracut-pre-udev.service.
Jul 12 00:40:39.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:39.091000 audit: BPF prog-id=7 op=LOAD
Jul 12 00:40:39.092000 audit: BPF prog-id=8 op=LOAD
Jul 12 00:40:39.093169 systemd[1]: Starting systemd-udevd.service...
Jul 12 00:40:39.112034 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Jul 12 00:40:39.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:39.119599 systemd[1]: Started systemd-udevd.service.
Jul 12 00:40:39.131366 systemd[1]: Starting dracut-pre-trigger.service...
Jul 12 00:40:39.148258 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Jul 12 00:40:39.176121 systemd[1]: Finished dracut-pre-trigger.service.
Jul 12 00:40:39.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:40:39.182121 systemd[1]: Starting systemd-udev-trigger.service...
Jul 12 00:40:39.218883 systemd[1]: Finished systemd-udev-trigger.service.
Jul 12 00:40:39.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:39.281514 kernel: hv_vmbus: Vmbus version:5.3 Jul 12 00:40:39.304517 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 12 00:40:39.304567 kernel: hv_vmbus: registering driver hid_hyperv Jul 12 00:40:39.304577 kernel: hv_vmbus: registering driver hv_netvsc Jul 12 00:40:39.305509 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 12 00:40:39.327275 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 12 00:40:39.327467 kernel: hv_vmbus: registering driver hv_storvsc Jul 12 00:40:39.327487 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 12 00:40:39.348851 kernel: scsi host1: storvsc_host_t Jul 12 00:40:39.349062 kernel: scsi host0: storvsc_host_t Jul 12 00:40:39.356510 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 12 00:40:39.364515 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 12 00:40:39.384829 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 12 00:40:39.408640 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 00:40:39.408657 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 12 00:40:39.419485 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 12 00:40:39.419613 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 12 00:40:39.419694 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 12 00:40:39.419778 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 12 00:40:39.419854 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 12 00:40:39.419932 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Jul 12 00:40:39.419950 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 12 00:40:39.451517 kernel: hv_netvsc 000d3af7-8473-000d-3af7-8473000d3af7 eth0: VF slot 1 added Jul 12 00:40:39.459534 kernel: hv_vmbus: registering driver hv_pci Jul 12 00:40:39.473094 kernel: hv_pci 18d22fe6-4ae1-40be-b48a-e142d911a7ec: PCI VMBus probing: Using version 0x10004 Jul 12 00:40:39.583070 kernel: hv_pci 18d22fe6-4ae1-40be-b48a-e142d911a7ec: PCI host bridge to bus 4ae1:00 Jul 12 00:40:39.583176 kernel: pci_bus 4ae1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 12 00:40:39.583272 kernel: pci_bus 4ae1:00: No busn resource found for root bus, will use [bus 00-ff] Jul 12 00:40:39.583364 kernel: pci 4ae1:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 12 00:40:39.583464 kernel: pci 4ae1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:40:39.583564 kernel: pci 4ae1:00:02.0: enabling Extended Tags Jul 12 00:40:39.583647 kernel: pci 4ae1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4ae1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 12 00:40:39.583726 kernel: pci_bus 4ae1:00: busn_res: [bus 00-ff] end is updated to 00 Jul 12 00:40:39.583799 kernel: pci 4ae1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:40:39.622522 kernel: mlx5_core 4ae1:00:02.0: firmware version: 16.30.1284 Jul 12 00:40:39.854127 kernel: mlx5_core 4ae1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jul 12 00:40:39.854250 kernel: hv_netvsc 000d3af7-8473-000d-3af7-8473000d3af7 eth0: VF registering: eth1 Jul 12 00:40:39.854340 kernel: mlx5_core 4ae1:00:02.0 eth1: joined to eth0 Jul 12 00:40:39.864539 kernel: mlx5_core 4ae1:00:02.0 enP19169s1: renamed from eth1 Jul 12 00:40:39.991524 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (541) Jul 12 00:40:40.004599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 12 00:40:40.054395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:40:40.153434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:40:40.258282 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:40:40.264651 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:40:40.278898 systemd[1]: Starting disk-uuid.service... Jul 12 00:40:40.300522 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:40:40.320522 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:40:41.318516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:40:41.319772 disk-uuid[604]: The operation has completed successfully. Jul 12 00:40:41.389468 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:40:41.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:41.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:41.389583 systemd[1]: Finished disk-uuid.service. Jul 12 00:40:41.395141 systemd[1]: Starting verity-setup.service... Jul 12 00:40:41.469037 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:40:41.834701 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:40:41.841288 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:40:41.852305 systemd[1]: Finished verity-setup.service. Jul 12 00:40:41.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:41.910519 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Jul 12 00:40:41.911107 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:40:41.915385 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:40:41.916261 systemd[1]: Starting ignition-setup.service... Jul 12 00:40:41.924565 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:40:41.962652 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:40:41.962710 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:40:41.967321 kernel: BTRFS info (device sda6): has skinny extents Jul 12 00:40:42.013756 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:40:42.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.044105 kernel: kauditd_printk_skb: 10 callbacks suppressed Jul 12 00:40:42.044166 kernel: audit: type=1130 audit(1752280842.018:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.043000 audit: BPF prog-id=9 op=LOAD Jul 12 00:40:42.049564 kernel: audit: type=1334 audit(1752280842.043:22): prog-id=9 op=LOAD Jul 12 00:40:42.049793 systemd[1]: Starting systemd-networkd.service... Jul 12 00:40:42.076341 systemd-networkd[868]: lo: Link UP Jul 12 00:40:42.078533 systemd-networkd[868]: lo: Gained carrier Jul 12 00:40:42.079076 systemd-networkd[868]: Enumeration completed Jul 12 00:40:42.110448 kernel: audit: type=1130 audit(1752280842.087:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:42.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.080124 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:40:42.083824 systemd[1]: Started systemd-networkd.service. Jul 12 00:40:42.088336 systemd[1]: Reached target network.target. Jul 12 00:40:42.117465 systemd[1]: Starting iscsiuio.service... Jul 12 00:40:42.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.125750 systemd[1]: Started iscsiuio.service. Jul 12 00:40:42.170193 kernel: audit: type=1130 audit(1752280842.133:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.170219 iscsid[879]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:40:42.170219 iscsid[879]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 12 00:40:42.170219 iscsid[879]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 12 00:40:42.170219 iscsid[879]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:40:42.170219 iscsid[879]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 12 00:40:42.170219 iscsid[879]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:40:42.170219 iscsid[879]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:40:42.294714 kernel: audit: type=1130 audit(1752280842.173:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.294740 kernel: audit: type=1130 audit(1752280842.221:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.155652 systemd[1]: Starting iscsid.service... Jul 12 00:40:42.161410 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:40:42.165850 systemd[1]: Started iscsid.service. Jul 12 00:40:42.178090 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:40:42.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.199715 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:40:42.336854 kernel: audit: type=1130 audit(1752280842.312:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:42.222401 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:40:42.254953 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:40:42.264952 systemd[1]: Reached target remote-fs.target. Jul 12 00:40:42.276045 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:40:42.304660 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:40:42.369510 kernel: mlx5_core 4ae1:00:02.0 enP19169s1: Link up Jul 12 00:40:42.416426 kernel: hv_netvsc 000d3af7-8473-000d-3af7-8473000d3af7 eth0: Data path switched to VF: enP19169s1 Jul 12 00:40:42.417340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:40:42.416740 systemd-networkd[868]: enP19169s1: Link UP Jul 12 00:40:42.416818 systemd-networkd[868]: eth0: Link UP Jul 12 00:40:42.416972 systemd-networkd[868]: eth0: Gained carrier Jul 12 00:40:42.429145 systemd-networkd[868]: enP19169s1: Gained carrier Jul 12 00:40:42.438595 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:40:42.598935 systemd[1]: Finished ignition-setup.service. Jul 12 00:40:42.625603 kernel: audit: type=1130 audit(1752280842.603:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:42.604619 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 12 00:40:44.345613 systemd-networkd[868]: eth0: Gained IPv6LL Jul 12 00:40:47.888893 ignition[896]: Ignition 2.14.0 Jul 12 00:40:47.892522 ignition[896]: Stage: fetch-offline Jul 12 00:40:47.892622 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:47.892652 ignition[896]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:47.959630 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:47.959813 ignition[896]: parsed url from cmdline: "" Jul 12 00:40:47.959817 ignition[896]: no config URL provided Jul 12 00:40:47.959822 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:40:47.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:47.966546 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:40:48.008012 kernel: audit: type=1130 audit(1752280847.975:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:47.959831 ignition[896]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:40:47.976935 systemd[1]: Starting ignition-fetch.service... 
Jul 12 00:40:47.959837 ignition[896]: failed to fetch config: resource requires networking Jul 12 00:40:47.960086 ignition[896]: Ignition finished successfully Jul 12 00:40:47.997883 ignition[902]: Ignition 2.14.0 Jul 12 00:40:47.997890 ignition[902]: Stage: fetch Jul 12 00:40:47.998007 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:47.998025 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:48.005441 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:48.005576 ignition[902]: parsed url from cmdline: "" Jul 12 00:40:48.005579 ignition[902]: no config URL provided Jul 12 00:40:48.005584 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:40:48.005591 ignition[902]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:40:48.005619 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 12 00:40:48.146314 ignition[902]: GET result: OK Jul 12 00:40:48.146394 ignition[902]: config has been read from IMDS userdata Jul 12 00:40:48.149940 unknown[902]: fetched base config from "system" Jul 12 00:40:48.180604 kernel: audit: type=1130 audit(1752280848.158:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:48.146441 ignition[902]: parsing config with SHA512: b421a156d66c2c6acb71da076aaefcc80842f879095545dfc2dd300c7e1e56948525e130df420780e15ca3beb02db7626a6543268b7b003773741ab4fe3b7617 Jul 12 00:40:48.149947 unknown[902]: fetched base config from "system" Jul 12 00:40:48.150529 ignition[902]: fetch: fetch complete Jul 12 00:40:48.149952 unknown[902]: fetched user config from "azure" Jul 12 00:40:48.150535 ignition[902]: fetch: fetch passed Jul 12 00:40:48.151767 systemd[1]: Finished ignition-fetch.service. Jul 12 00:40:48.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.150579 ignition[902]: Ignition finished successfully Jul 12 00:40:48.228292 kernel: audit: type=1130 audit(1752280848.206:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.159913 systemd[1]: Starting ignition-kargs.service... Jul 12 00:40:48.190151 ignition[908]: Ignition 2.14.0 Jul 12 00:40:48.202562 systemd[1]: Finished ignition-kargs.service. Jul 12 00:40:48.190158 ignition[908]: Stage: kargs Jul 12 00:40:48.228244 systemd[1]: Starting ignition-disks.service... Jul 12 00:40:48.190281 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:48.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.247616 systemd[1]: Finished ignition-disks.service. 
Jul 12 00:40:48.190300 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:48.278438 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:40:48.193136 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:48.301466 kernel: audit: type=1130 audit(1752280848.255:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.287138 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:40:48.198147 ignition[908]: kargs: kargs passed Jul 12 00:40:48.296003 systemd[1]: Reached target local-fs.target. Jul 12 00:40:48.198207 ignition[908]: Ignition finished successfully Jul 12 00:40:48.305968 systemd[1]: Reached target sysinit.target. Jul 12 00:40:48.239470 ignition[914]: Ignition 2.14.0 Jul 12 00:40:48.314087 systemd[1]: Reached target basic.target. Jul 12 00:40:48.239476 ignition[914]: Stage: disks Jul 12 00:40:48.322248 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:40:48.239615 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:48.239638 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:48.242404 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:48.245231 ignition[914]: disks: disks passed Jul 12 00:40:48.245298 ignition[914]: Ignition finished successfully Jul 12 00:40:48.457108 systemd-fsck[923]: ROOT: clean, 619/7326000 files, 481078/7359488 blocks Jul 12 00:40:48.465956 systemd[1]: Finished systemd-fsck-root.service. 
Jul 12 00:40:48.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.495144 systemd[1]: Mounting sysroot.mount... Jul 12 00:40:48.504611 kernel: audit: type=1130 audit(1752280848.470:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:48.517514 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:40:48.518289 systemd[1]: Mounted sysroot.mount. Jul 12 00:40:48.522404 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:40:48.557228 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:40:48.562048 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 12 00:40:48.569330 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:40:48.569365 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:40:48.575212 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:40:48.665640 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:40:48.671544 systemd[1]: Starting initrd-setup-root.service... Jul 12 00:40:48.694526 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934) Jul 12 00:40:48.706225 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:40:48.706273 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:40:48.706283 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:40:48.717761 kernel: BTRFS info (device sda6): has skinny extents Jul 12 00:40:48.722986 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 12 00:40:48.752384 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:40:48.775116 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:40:48.784963 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:40:49.549967 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:40:49.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:49.574472 systemd[1]: Starting ignition-mount.service... Jul 12 00:40:49.592021 kernel: audit: type=1130 audit(1752280849.554:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:49.583436 systemd[1]: Starting sysroot-boot.service... Jul 12 00:40:49.597484 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 12 00:40:49.597599 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 12 00:40:49.622411 ignition[1002]: INFO : Ignition 2.14.0 Jul 12 00:40:49.622411 ignition[1002]: INFO : Stage: mount Jul 12 00:40:49.622411 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:49.622411 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:49.687963 kernel: audit: type=1130 audit(1752280849.625:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:49.687988 kernel: audit: type=1130 audit(1752280849.648:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:49.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:49.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:49.618123 systemd[1]: Finished sysroot-boot.service. Jul 12 00:40:49.692168 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:49.692168 ignition[1002]: INFO : mount: mount passed Jul 12 00:40:49.692168 ignition[1002]: INFO : Ignition finished successfully Jul 12 00:40:49.627815 systemd[1]: Finished ignition-mount.service. Jul 12 00:40:50.673412 coreos-metadata[933]: Jul 12 00:40:50.673 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:40:50.681801 coreos-metadata[933]: Jul 12 00:40:50.681 INFO Fetch successful Jul 12 00:40:50.714340 coreos-metadata[933]: Jul 12 00:40:50.714 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:40:50.727159 coreos-metadata[933]: Jul 12 00:40:50.727 INFO Fetch successful Jul 12 00:40:50.740816 coreos-metadata[933]: Jul 12 00:40:50.740 INFO wrote hostname ci-3510.3.7-n-2c4241d00d to /sysroot/etc/hostname Jul 12 00:40:50.750571 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 12 00:40:50.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:50.756770 systemd[1]: Starting ignition-files.service... Jul 12 00:40:50.788055 kernel: audit: type=1130 audit(1752280850.755:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:50.787197 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:40:50.811524 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1012) Jul 12 00:40:50.828985 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:40:50.829043 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:40:50.829053 kernel: BTRFS info (device sda6): has skinny extents Jul 12 00:40:50.838638 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:40:50.861663 ignition[1031]: INFO : Ignition 2.14.0 Jul 12 00:40:50.861663 ignition[1031]: INFO : Stage: files Jul 12 00:40:50.871619 ignition[1031]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:50.871619 ignition[1031]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:50.871619 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:50.871619 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:40:50.871619 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:40:50.871619 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:40:50.995177 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:40:51.003624 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:40:51.003624 
ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:40:51.003199 unknown[1031]: wrote ssh authorized keys file for user: core Jul 12 00:40:51.023993 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 00:40:51.023993 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 12 00:40:51.079487 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 12 00:40:51.271726 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 12 00:40:51.282589 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 12 00:40:51.282589 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 12 00:40:51.481867 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 00:40:51.574972 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 12 00:40:51.574972 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 12 00:40:51.603739 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem658249301" Jul 12 00:40:51.752297 ignition[1031]: CRITICAL : files: createFilesystemsFiles: 
createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem658249301": device or resource busy Jul 12 00:40:51.752297 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem658249301", trying btrfs: device or resource busy Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem658249301" Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem658249301" Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem658249301" Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem658249301" Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3345507075" Jul 12 00:40:51.752297 ignition[1031]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3345507075": device or resource busy Jul 12 00:40:51.752297 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at 
"/mnt/oem3345507075", trying btrfs: device or resource busy Jul 12 00:40:51.752297 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3345507075" Jul 12 00:40:51.625920 systemd[1]: mnt-oem658249301.mount: Deactivated successfully. Jul 12 00:40:51.915904 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3345507075" Jul 12 00:40:51.915904 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3345507075" Jul 12 00:40:51.915904 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3345507075" Jul 12 00:40:51.915904 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 12 00:40:51.915904 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:40:51.915904 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 12 00:40:51.651553 systemd[1]: mnt-oem3345507075.mount: Deactivated successfully. 
Jul 12 00:40:52.062449 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Jul 12 00:40:52.327680 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:40:52.327680 ignition[1031]: INFO : files: op(14): [started] processing unit "waagent.service" Jul 12 00:40:52.327680 ignition[1031]: INFO : files: op(14): [finished] processing unit "waagent.service" Jul 12 00:40:52.327680 ignition[1031]: INFO : files: op(15): [started] processing unit "nvidia.service" Jul 12 00:40:52.327680 ignition[1031]: INFO : files: op(15): [finished] processing unit "nvidia.service" Jul 12 00:40:52.327680 ignition[1031]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Jul 12 00:40:52.414131 kernel: audit: type=1130 audit(1752280852.344:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:52.340610 systemd[1]: Finished ignition-files.service. Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:40:52.423067 ignition[1031]: INFO : files: files passed Jul 12 00:40:52.423067 ignition[1031]: INFO : Ignition finished successfully Jul 12 00:40:52.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:52.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.348329 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:40:52.371126 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 12 00:40:52.584468 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:40:52.372160 systemd[1]: Starting ignition-quench.service... Jul 12 00:40:52.390447 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:40:52.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.390578 systemd[1]: Finished ignition-quench.service. Jul 12 00:40:52.395607 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 12 00:40:52.407437 systemd[1]: Reached target ignition-complete.target. Jul 12 00:40:52.419562 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:40:52.448064 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:40:52.448186 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:40:52.459298 systemd[1]: Reached target initrd-fs.target. Jul 12 00:40:52.471005 systemd[1]: Reached target initrd.target. Jul 12 00:40:52.482345 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:40:52.483213 systemd[1]: Starting dracut-pre-pivot.service... 
Jul 12 00:40:52.532337 systemd[1]: Finished dracut-pre-pivot.service. Jul 12 00:40:52.547376 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:40:52.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.566083 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:40:52.574007 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:40:52.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.589322 systemd[1]: Stopped target timers.target. Jul 12 00:40:52.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.601652 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:40:52.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.601781 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:40:52.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.610111 systemd[1]: Stopped target initrd.target. Jul 12 00:40:52.618924 systemd[1]: Stopped target basic.target. Jul 12 00:40:52.626768 systemd[1]: Stopped target ignition-complete.target. 
Jul 12 00:40:52.790404 ignition[1069]: INFO : Ignition 2.14.0 Jul 12 00:40:52.790404 ignition[1069]: INFO : Stage: umount Jul 12 00:40:52.790404 ignition[1069]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 12 00:40:52.790404 ignition[1069]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 12 00:40:52.790404 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:40:52.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.636540 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:40:52.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.865127 ignition[1069]: INFO : umount: umount passed Jul 12 00:40:52.865127 ignition[1069]: INFO : Ignition finished successfully Jul 12 00:40:52.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.645563 systemd[1]: Stopped target initrd-root-device.target. 
Jul 12 00:40:52.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.654652 systemd[1]: Stopped target remote-fs.target. Jul 12 00:40:52.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.663872 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:40:52.672470 systemd[1]: Stopped target sysinit.target. Jul 12 00:40:52.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.680895 systemd[1]: Stopped target local-fs.target. Jul 12 00:40:52.688681 systemd[1]: Stopped target local-fs-pre.target. Jul 12 00:40:52.700038 systemd[1]: Stopped target swap.target. Jul 12 00:40:52.708137 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:40:52.708256 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:40:52.717965 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:40:52.726323 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:40:52.726431 systemd[1]: Stopped dracut-initqueue.service. Jul 12 00:40:53.011614 kernel: kauditd_printk_skb: 20 callbacks suppressed Jul 12 00:40:53.011649 kernel: audit: type=1131 audit(1752280852.980:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:52.734811 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:40:52.734966 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 12 00:40:52.744917 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:40:53.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.745019 systemd[1]: Stopped ignition-files.service. Jul 12 00:40:53.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.752825 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 12 00:40:53.107130 kernel: audit: type=1131 audit(1752280853.031:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.107162 kernel: audit: type=1130 audit(1752280853.057:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.107172 kernel: audit: type=1131 audit(1752280853.057:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.752925 systemd[1]: Stopped flatcar-metadata-hostname.service. 
Jul 12 00:40:53.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.763851 systemd[1]: Stopping ignition-mount.service... Jul 12 00:40:53.146811 kernel: audit: type=1131 audit(1752280853.082:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.146837 kernel: audit: type=1334 audit(1752280853.088:64): prog-id=6 op=UNLOAD Jul 12 00:40:53.088000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:40:52.776920 systemd[1]: Stopping iscsiuio.service... Jul 12 00:40:53.172147 kernel: audit: type=1131 audit(1752280853.151:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.800053 systemd[1]: Stopping sysroot-boot.service... Jul 12 00:40:53.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.806330 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:40:53.204703 kernel: audit: type=1131 audit(1752280853.178:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.808571 systemd[1]: Stopped systemd-udev-trigger.service. 
Jul 12 00:40:53.229155 kernel: audit: type=1131 audit(1752280853.209:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.836237 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:40:52.836443 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:40:52.852958 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:40:52.854042 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 12 00:40:53.284615 kernel: audit: type=1131 audit(1752280853.257:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.854179 systemd[1]: Stopped iscsiuio.service. Jul 12 00:40:52.860106 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:40:52.860238 systemd[1]: Stopped ignition-mount.service. Jul 12 00:40:53.326608 kernel: hv_netvsc 000d3af7-8473-000d-3af7-8473000d3af7 eth0: Data path switched from VF: enP19169s1 Jul 12 00:40:53.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:53.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:40:52.871520 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:40:53.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.871599 systemd[1]: Stopped ignition-disks.service. Jul 12 00:40:52.880254 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:40:52.880308 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:40:52.890481 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:40:53.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.890539 systemd[1]: Stopped ignition-fetch.service. Jul 12 00:40:53.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.900108 systemd[1]: Stopped target network.target. Jul 12 00:40:53.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.909055 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:40:53.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.909113 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:40:53.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:40:53.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.919077 systemd[1]: Stopped target paths.target. Jul 12 00:40:53.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.927157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:40:52.930514 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:40:53.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:40:52.940112 systemd[1]: Stopped target slices.target. Jul 12 00:40:52.947495 systemd[1]: Stopped target sockets.target. Jul 12 00:40:52.956474 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:40:52.956517 systemd[1]: Closed iscsid.socket. Jul 12 00:40:52.963822 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:40:52.963863 systemd[1]: Closed iscsiuio.socket. Jul 12 00:40:52.971738 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:40:52.971784 systemd[1]: Stopped ignition-setup.service. Jul 12 00:40:52.981600 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:40:53.011880 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:40:53.497582 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Jul 12 00:40:53.497628 iscsid[879]: iscsid shutting down. Jul 12 00:40:53.018275 systemd-networkd[868]: eth0: DHCPv6 lease lost Jul 12 00:40:53.497000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:40:53.023223 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 12 00:40:53.023339 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:40:53.049943 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:40:53.050023 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:40:53.058429 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:40:53.058544 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:40:53.083422 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:40:53.083470 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:40:53.115948 systemd[1]: Stopping network-cleanup.service... Jul 12 00:40:53.136851 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:40:53.136939 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:40:53.151631 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:40:53.151693 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:40:53.199094 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:40:53.199157 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:40:53.231873 systemd[1]: Stopping systemd-udevd.service... Jul 12 00:40:53.242590 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 00:40:53.247377 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:40:53.247545 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:40:53.258623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:40:53.258666 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:40:53.285355 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:40:53.285417 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:40:53.302068 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:40:53.302138 systemd[1]: Stopped dracut-pre-udev.service. Jul 12 00:40:53.314793 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 12 00:40:53.314858 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:40:53.319930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:40:53.319978 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:40:53.332259 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:40:53.352148 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:40:53.352246 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 12 00:40:53.365976 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:40:53.366038 systemd[1]: Stopped kmod-static-nodes.service. Jul 12 00:40:53.370651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:40:53.370695 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:40:53.380204 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 12 00:40:53.380769 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:40:53.380864 systemd[1]: Stopped sysroot-boot.service. Jul 12 00:40:53.387355 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:40:53.387444 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 12 00:40:53.404358 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:40:53.404421 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:40:53.417538 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:40:53.417638 systemd[1]: Stopped network-cleanup.service. Jul 12 00:40:53.427555 systemd[1]: Reached target initrd-switch-root.target. Jul 12 00:40:53.437975 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:40:53.455520 systemd[1]: Switching root. Jul 12 00:40:53.498970 systemd-journald[276]: Journal stopped Jul 12 00:41:13.891946 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:41:13.891967 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 12 00:41:13.891977 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:41:13.891987 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:41:13.891995 kernel: SELinux: policy capability open_perms=1 Jul 12 00:41:13.892003 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:41:13.892012 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:41:13.892020 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:41:13.892028 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:41:13.892036 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:41:13.892044 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:41:13.892055 systemd[1]: Successfully loaded SELinux policy in 436.011ms. Jul 12 00:41:13.892065 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.444ms. Jul 12 00:41:13.892075 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:41:13.892085 systemd[1]: Detected virtualization microsoft. Jul 12 00:41:13.892095 systemd[1]: Detected architecture arm64. Jul 12 00:41:13.892104 systemd[1]: Detected first boot. Jul 12 00:41:13.892113 systemd[1]: Hostname set to . Jul 12 00:41:13.892122 systemd[1]: Initializing machine ID from random generator. 
Jul 12 00:41:13.892131 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 12 00:41:13.892140 kernel: audit: type=1400 audit(1752280858.695:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:41:13.892150 kernel: audit: type=1400 audit(1752280858.695:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:41:13.892160 kernel: audit: type=1334 audit(1752280858.701:84): prog-id=10 op=LOAD Jul 12 00:41:13.892169 kernel: audit: type=1334 audit(1752280858.701:85): prog-id=10 op=UNLOAD Jul 12 00:41:13.892177 kernel: audit: type=1334 audit(1752280858.722:86): prog-id=11 op=LOAD Jul 12 00:41:13.892185 kernel: audit: type=1334 audit(1752280858.722:87): prog-id=11 op=UNLOAD Jul 12 00:41:13.892194 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 12 00:41:13.892203 kernel: audit: type=1400 audit(1752280861.146:88): avc: denied { associate } for pid=1102 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 12 00:41:13.892213 kernel: audit: type=1300 audit(1752280861.146:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014588c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:41:13.892224 kernel: audit: type=1327 audit(1752280861.146:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:41:13.892233 kernel: audit: type=1400 audit(1752280861.155:89): avc: denied { associate } for pid=1102 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 12 00:41:13.892242 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:41:13.892251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:41:13.892261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 12 00:41:13.892272 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:41:13.892282 kernel: kauditd_printk_skb: 5 callbacks suppressed Jul 12 00:41:13.892290 kernel: audit: type=1334 audit(1752280873.047:90): prog-id=12 op=LOAD Jul 12 00:41:13.892299 kernel: audit: type=1334 audit(1752280873.047:91): prog-id=3 op=UNLOAD Jul 12 00:41:13.892307 systemd[1]: iscsid.service: Deactivated successfully. Jul 12 00:41:13.892316 kernel: audit: type=1334 audit(1752280873.053:92): prog-id=13 op=LOAD Jul 12 00:41:13.892327 kernel: audit: type=1334 audit(1752280873.058:93): prog-id=14 op=LOAD Jul 12 00:41:13.892336 kernel: audit: type=1334 audit(1752280873.058:94): prog-id=4 op=UNLOAD Jul 12 00:41:13.892345 kernel: audit: type=1334 audit(1752280873.058:95): prog-id=5 op=UNLOAD Jul 12 00:41:13.892353 systemd[1]: Stopped iscsid.service. Jul 12 00:41:13.892364 kernel: audit: type=1131 audit(1752280873.060:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.892373 kernel: audit: type=1334 audit(1752280873.091:97): prog-id=12 op=UNLOAD Jul 12 00:41:13.892382 kernel: audit: type=1131 audit(1752280873.127:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.892391 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:41:13.892403 systemd[1]: Stopped initrd-switch-root.service. 
Jul 12 00:41:13.892412 kernel: audit: type=1130 audit(1752280873.159:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.892423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:41:13.892432 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 12 00:41:13.892441 systemd[1]: Created slice system-addon\x2drun.slice. Jul 12 00:41:13.892451 systemd[1]: Created slice system-getty.slice. Jul 12 00:41:13.892460 systemd[1]: Created slice system-modprobe.slice. Jul 12 00:41:13.892469 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 12 00:41:13.892478 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 12 00:41:13.892488 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 12 00:41:13.892508 systemd[1]: Created slice user.slice. Jul 12 00:41:13.892519 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:41:13.892528 systemd[1]: Started systemd-ask-password-wall.path. Jul 12 00:41:13.892538 systemd[1]: Set up automount boot.automount. Jul 12 00:41:13.892547 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 12 00:41:13.892556 systemd[1]: Stopped target initrd-switch-root.target. Jul 12 00:41:13.892565 systemd[1]: Stopped target initrd-fs.target. Jul 12 00:41:13.892574 systemd[1]: Stopped target initrd-root-fs.target. Jul 12 00:41:13.892584 systemd[1]: Reached target integritysetup.target. Jul 12 00:41:13.892594 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:41:13.892603 systemd[1]: Reached target remote-fs.target. Jul 12 00:41:13.892613 systemd[1]: Reached target slices.target. Jul 12 00:41:13.892623 systemd[1]: Reached target swap.target. Jul 12 00:41:13.892632 systemd[1]: Reached target torcx.target. Jul 12 00:41:13.892641 systemd[1]: Reached target veritysetup.target. 
Jul 12 00:41:13.892650 systemd[1]: Listening on systemd-coredump.socket. Jul 12 00:41:13.892661 systemd[1]: Listening on systemd-initctl.socket. Jul 12 00:41:13.892670 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:41:13.892679 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:41:13.892689 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:41:13.892698 systemd[1]: Listening on systemd-userdbd.socket. Jul 12 00:41:13.892708 systemd[1]: Mounting dev-hugepages.mount... Jul 12 00:41:13.892718 systemd[1]: Mounting dev-mqueue.mount... Jul 12 00:41:13.892728 systemd[1]: Mounting media.mount... Jul 12 00:41:13.892737 systemd[1]: Mounting sys-kernel-debug.mount... Jul 12 00:41:13.892746 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 12 00:41:13.892755 systemd[1]: Mounting tmp.mount... Jul 12 00:41:13.892765 systemd[1]: Starting flatcar-tmpfiles.service... Jul 12 00:41:13.892774 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:41:13.892783 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:41:13.892793 systemd[1]: Starting modprobe@configfs.service... Jul 12 00:41:13.892803 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:41:13.892813 systemd[1]: Starting modprobe@drm.service... Jul 12 00:41:13.892823 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:41:13.892832 systemd[1]: Starting modprobe@fuse.service... Jul 12 00:41:13.892841 systemd[1]: Starting modprobe@loop.service... Jul 12 00:41:13.892851 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:41:13.892860 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:41:13.892869 systemd[1]: Stopped systemd-fsck-root.service. Jul 12 00:41:13.892879 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:41:13.892889 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 12 00:41:13.892899 systemd[1]: Stopped systemd-journald.service. Jul 12 00:41:13.892908 systemd[1]: systemd-journald.service: Consumed 3.086s CPU time. Jul 12 00:41:13.892917 systemd[1]: Starting systemd-journald.service... Jul 12 00:41:13.892927 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:41:13.892936 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:41:13.892945 kernel: loop: module loaded Jul 12 00:41:13.892954 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:41:13.892963 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:41:13.892974 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:41:13.892983 systemd[1]: Stopped verity-setup.service. Jul 12 00:41:13.892993 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:41:13.893002 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:41:13.893012 systemd[1]: Mounted media.mount. Jul 12 00:41:13.893022 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:41:13.893031 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:41:13.893040 systemd[1]: Mounted tmp.mount. Jul 12 00:41:13.893049 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:41:13.893060 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:41:13.893070 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:41:13.893082 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:41:13.893093 kernel: fuse: init (API version 7.34) Jul 12 00:41:13.893102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:41:13.893112 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:41:13.893121 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:41:13.893134 systemd-journald[1192]: Journal started Jul 12 00:41:13.893171 systemd-journald[1192]: Runtime Journal (/run/log/journal/7f0011daa6714b85a35f3b60f8570ed2) is 8.0M, max 78.5M, 70.5M free. 
Jul 12 00:40:57.623000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:40:58.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:40:58.695000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:40:58.701000 audit: BPF prog-id=10 op=LOAD Jul 12 00:40:58.701000 audit: BPF prog-id=10 op=UNLOAD Jul 12 00:40:58.722000 audit: BPF prog-id=11 op=LOAD Jul 12 00:40:58.722000 audit: BPF prog-id=11 op=UNLOAD Jul 12 00:41:01.146000 audit[1102]: AVC avc: denied { associate } for pid=1102 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 12 00:41:01.146000 audit[1102]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014588c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:41:01.146000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:41:01.155000 audit[1102]: AVC avc: denied { associate } for pid=1102 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 12 00:41:01.155000 audit[1102]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145965 a2=1ed a3=0 items=2 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:41:01.155000 audit: CWD cwd="/" Jul 12 00:41:01.155000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:41:01.155000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:41:01.155000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:41:13.047000 audit: BPF prog-id=12 op=LOAD Jul 12 00:41:13.047000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:41:13.053000 audit: BPF prog-id=13 op=LOAD Jul 12 00:41:13.058000 audit: BPF prog-id=14 op=LOAD Jul 12 00:41:13.058000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:41:13.058000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:41:13.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.091000 audit: BPF prog-id=12 op=UNLOAD Jul 12 00:41:13.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:41:13.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.674000 audit: BPF prog-id=15 op=LOAD Jul 12 00:41:13.674000 audit: BPF prog-id=16 op=LOAD Jul 12 00:41:13.674000 audit: BPF prog-id=17 op=LOAD Jul 12 00:41:13.674000 audit: BPF prog-id=13 op=UNLOAD Jul 12 00:41:13.674000 audit: BPF prog-id=14 op=UNLOAD Jul 12 00:41:13.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:41:13.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.889000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:41:13.889000 audit[1192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff5ae61b0 a2=4000 a3=1 items=0 ppid=1 pid=1192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:41:13.889000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:41:13.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:41:13.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:01.112278 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:41:13.046309 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:41:01.112637 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:41:13.046321 systemd[1]: Unnecessary job was removed for dev-sda6.device. Jul 12 00:41:01.112656 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:41:13.060191 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:41:01.112693 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 12 00:41:13.060549 systemd[1]: systemd-journald.service: Consumed 3.086s CPU time. 
Jul 12 00:41:01.112703 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 12 00:41:01.112733 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 12 00:41:01.112744 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 12 00:41:01.112941 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 12 00:41:01.112973 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:41:01.112984 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:41:01.127129 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 12 00:41:01.127164 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 12 00:41:01.127185 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 12 
00:41:01.127199 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 12 00:41:01.127218 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 12 00:41:01.127231 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 12 00:41:11.500348 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:41:11.500656 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:41:11.500773 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:41:11.500933 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" 
image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:41:11.500983 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 12 00:41:11.501039 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2025-07-12T00:41:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 12 00:41:13.904788 systemd[1]: Finished modprobe@drm.service. Jul 12 00:41:13.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.914429 systemd[1]: Started systemd-journald.service. Jul 12 00:41:13.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.915304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:41:13.915449 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:41:13.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:41:13.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.920513 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:41:13.920640 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:41:13.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.925355 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:41:13.925478 systemd[1]: Finished modprobe@loop.service. Jul 12 00:41:13.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.930397 systemd[1]: Finished systemd-network-generator.service. Jul 12 00:41:13.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.935879 systemd[1]: Finished systemd-remount-fs.service. 
Jul 12 00:41:13.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:13.941012 systemd[1]: Reached target network-pre.target. Jul 12 00:41:13.946794 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:41:13.952115 systemd[1]: Mounting sys-kernel-config.mount... Jul 12 00:41:13.955956 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:41:13.971280 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:41:13.977124 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:41:13.981415 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:41:13.982450 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:41:13.986777 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:41:13.987848 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:41:13.994155 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:41:13.999510 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:41:14.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:14.004412 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:41:14.010056 systemd[1]: Starting systemd-udev-settle.service... Jul 12 00:41:14.016925 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:41:14.026994 systemd[1]: Finished systemd-modules-load.service. 
Jul 12 00:41:14.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:14.033166 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:41:14.039324 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:41:14.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:14.044237 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:41:14.144402 systemd-journald[1192]: Time spent on flushing to /var/log/journal/7f0011daa6714b85a35f3b60f8570ed2 is 15.794ms for 1112 entries. Jul 12 00:41:14.144402 systemd-journald[1192]: System Journal (/var/log/journal/7f0011daa6714b85a35f3b60f8570ed2) is 8.0M, max 2.6G, 2.6G free. Jul 12 00:41:14.300949 systemd-journald[1192]: Received client request to flush runtime journal. Jul 12 00:41:14.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:14.227871 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:41:14.301988 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:41:14.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:15.054358 systemd[1]: Finished systemd-sysusers.service. 
Jul 12 00:41:15.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:15.060291 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:41:15.876920 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:41:15.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:15.899503 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:41:15.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:15.904000 audit: BPF prog-id=18 op=LOAD Jul 12 00:41:15.905000 audit: BPF prog-id=19 op=LOAD Jul 12 00:41:15.905000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:41:15.905000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:41:15.906230 systemd[1]: Starting systemd-udevd.service... Jul 12 00:41:15.924128 systemd-udevd[1227]: Using default interface naming scheme 'v252'. Jul 12 00:41:16.191169 systemd[1]: Started systemd-udevd.service. Jul 12 00:41:16.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:41:16.202000 audit: BPF prog-id=20 op=LOAD Jul 12 00:41:16.203985 systemd[1]: Starting systemd-networkd.service... Jul 12 00:41:16.237073 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. 
Jul 12 00:41:16.294000 audit: BPF prog-id=21 op=LOAD
Jul 12 00:41:16.294000 audit: BPF prog-id=22 op=LOAD
Jul 12 00:41:16.294000 audit: BPF prog-id=23 op=LOAD
Jul 12 00:41:16.296141 systemd[1]: Starting systemd-userdbd.service...
Jul 12 00:41:16.313006 kernel: mousedev: PS/2 mouse device common for all mice
Jul 12 00:41:16.354720 kernel: hv_vmbus: registering driver hyperv_fb
Jul 12 00:41:16.354836 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 12 00:41:16.354866 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 12 00:41:16.343355 systemd[1]: Started systemd-userdbd.service.
Jul 12 00:41:16.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:16.363875 kernel: Console: switching to colour dummy device 80x25
Jul 12 00:41:16.358000 audit[1230]: AVC avc: denied { confidentiality } for pid=1230 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 12 00:41:16.369574 kernel: hv_vmbus: registering driver hv_balloon
Jul 12 00:41:16.369666 kernel: Console: switching to colour frame buffer device 128x48
Jul 12 00:41:16.383055 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 12 00:41:16.383243 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 12 00:41:16.358000 audit[1230]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadcd79700 a1=aa2c a2=ffffb44724b0 a3=aaaadccd9010 items=12 ppid=1227 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:41:16.358000 audit: CWD cwd="/"
Jul 12 00:41:16.358000 audit: PATH item=0 name=(null) inode=7171 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=1 name=(null) inode=9927 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=2 name=(null) inode=9927 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=3 name=(null) inode=9928 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=4 name=(null) inode=9927 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=5 name=(null) inode=9929 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=6 name=(null) inode=9927 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=7 name=(null) inode=9930 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=8 name=(null) inode=9927 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=9 name=(null) inode=9931 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=10 name=(null) inode=9927 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PATH item=11 name=(null) inode=9932 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 12 00:41:16.358000 audit: PROCTITLE proctitle="(udev-worker)"
Jul 12 00:41:16.401445 kernel: hv_utils: Registering HyperV Utility Driver
Jul 12 00:41:16.401572 kernel: hv_vmbus: registering driver hv_utils
Jul 12 00:41:16.408745 kernel: hv_utils: Heartbeat IC version 3.0
Jul 12 00:41:16.408858 kernel: hv_utils: Shutdown IC version 3.2
Jul 12 00:41:15.950386 kernel: hv_utils: TimeSync IC version 4.0
Jul 12 00:41:16.011697 systemd-journald[1192]: Time jumped backwards, rotating.
Jul 12 00:41:16.467091 systemd-networkd[1248]: lo: Link UP
Jul 12 00:41:16.467107 systemd-networkd[1248]: lo: Gained carrier
Jul 12 00:41:16.467589 systemd-networkd[1248]: Enumeration completed
Jul 12 00:41:16.467711 systemd[1]: Started systemd-networkd.service.
Jul 12 00:41:16.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:16.473562 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 12 00:41:16.561061 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:41:16.615298 kernel: mlx5_core 4ae1:00:02.0 enP19169s1: Link up
Jul 12 00:41:16.643313 kernel: hv_netvsc 000d3af7-8473-000d-3af7-8473000d3af7 eth0: Data path switched to VF: enP19169s1
Jul 12 00:41:16.644629 systemd-networkd[1248]: enP19169s1: Link UP
Jul 12 00:41:16.644859 systemd-networkd[1248]: eth0: Link UP
Jul 12 00:41:16.644923 systemd-networkd[1248]: eth0: Gained carrier
Jul 12 00:41:16.647699 systemd-networkd[1248]: enP19169s1: Gained carrier
Jul 12 00:41:16.653444 systemd-networkd[1248]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:41:16.657084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 12 00:41:16.662741 systemd[1]: Finished systemd-udev-settle.service.
Jul 12 00:41:16.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:16.668899 systemd[1]: Starting lvm2-activation-early.service...
Jul 12 00:41:17.148557 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:41:17.206301 systemd[1]: Finished lvm2-activation-early.service.
Jul 12 00:41:17.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.211333 systemd[1]: Reached target cryptsetup.target.
Jul 12 00:41:17.217027 systemd[1]: Starting lvm2-activation.service...
Jul 12 00:41:17.221549 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:41:17.247360 systemd[1]: Finished lvm2-activation.service.
Jul 12 00:41:17.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.252097 systemd[1]: Reached target local-fs-pre.target.
Jul 12 00:41:17.256606 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:41:17.256640 systemd[1]: Reached target local-fs.target.
Jul 12 00:41:17.260841 systemd[1]: Reached target machines.target.
Jul 12 00:41:17.266676 systemd[1]: Starting ldconfig.service...
Jul 12 00:41:17.283082 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:41:17.283171 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:17.284464 systemd[1]: Starting systemd-boot-update.service...
Jul 12 00:41:17.289919 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 12 00:41:17.297072 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 12 00:41:17.303118 systemd[1]: Starting systemd-sysext.service...
Jul 12 00:41:17.353002 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1309 (bootctl)
Jul 12 00:41:17.354442 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 12 00:41:17.638590 systemd[1]: Unmounting usr-share-oem.mount...
Jul 12 00:41:17.712647 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:41:17.713554 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 12 00:41:17.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.722895 kernel: kauditd_printk_skb: 69 callbacks suppressed
Jul 12 00:41:17.722992 kernel: audit: type=1130 audit(1752280877.717:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.747480 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 12 00:41:17.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.771417 kernel: audit: type=1130 audit(1752280877.752:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.780630 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 12 00:41:17.780971 systemd[1]: Unmounted usr-share-oem.mount.
Jul 12 00:41:17.849293 kernel: loop0: detected capacity change from 0 to 207008
Jul 12 00:41:17.901293 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:41:17.927301 kernel: loop1: detected capacity change from 0 to 207008
Jul 12 00:41:17.933099 (sd-sysext)[1321]: Using extensions 'kubernetes'.
Jul 12 00:41:17.933505 (sd-sysext)[1321]: Merged extensions into '/usr'.
Jul 12 00:41:17.952248 systemd[1]: Mounting usr-share-oem.mount...
Jul 12 00:41:17.957072 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:41:17.958496 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:41:17.963689 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:41:17.969312 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:41:17.973149 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:41:17.973306 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:17.975674 systemd[1]: Mounted usr-share-oem.mount.
Jul 12 00:41:17.980131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:41:17.980324 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:41:17.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:17.984982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:41:17.985101 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:41:17.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.021243 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:41:18.021532 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:41:18.021706 kernel: audit: type=1130 audit(1752280877.983:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.021760 kernel: audit: type=1131 audit(1752280877.983:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.042683 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:41:18.042913 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.043115 kernel: audit: type=1130 audit(1752280878.019:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.044287 kernel: audit: type=1131 audit(1752280878.019:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.044630 systemd[1]: Finished systemd-sysext.service.
Jul 12 00:41:18.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.090634 kernel: audit: type=1130 audit(1752280878.040:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.091051 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:41:18.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.112608 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 12 00:41:18.137511 kernel: audit: type=1131 audit(1752280878.040:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.137616 kernel: audit: type=1130 audit(1752280878.061:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.139937 systemd[1]: Reloading.
Jul 12 00:41:18.146687 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 12 00:41:18.148717 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:41:18.176162 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:41:18.205480 /usr/lib/systemd/system-generators/torcx-generator[1348]: time="2025-07-12T00:41:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 12 00:41:18.205514 /usr/lib/systemd/system-generators/torcx-generator[1348]: time="2025-07-12T00:41:18Z" level=info msg="torcx already run"
Jul 12 00:41:18.254802 systemd-fsck[1317]: fsck.fat 4.2 (2021-01-31)
Jul 12 00:41:18.254802 systemd-fsck[1317]: /dev/sda1: 236 files, 117310/258078 clusters
Jul 12 00:41:18.300593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 12 00:41:18.300617 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 12 00:41:18.318214 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:41:18.385000 audit: BPF prog-id=24 op=LOAD
Jul 12 00:41:18.385000 audit: BPF prog-id=21 op=UNLOAD
Jul 12 00:41:18.391000 audit: BPF prog-id=25 op=LOAD
Jul 12 00:41:18.391000 audit: BPF prog-id=26 op=LOAD
Jul 12 00:41:18.391000 audit: BPF prog-id=22 op=UNLOAD
Jul 12 00:41:18.391000 audit: BPF prog-id=23 op=UNLOAD
Jul 12 00:41:18.392000 audit: BPF prog-id=27 op=LOAD
Jul 12 00:41:18.392000 audit: BPF prog-id=15 op=UNLOAD
Jul 12 00:41:18.392000 audit: BPF prog-id=28 op=LOAD
Jul 12 00:41:18.392000 audit: BPF prog-id=29 op=LOAD
Jul 12 00:41:18.392000 audit: BPF prog-id=16 op=UNLOAD
Jul 12 00:41:18.392000 audit: BPF prog-id=17 op=UNLOAD
Jul 12 00:41:18.394286 kernel: audit: type=1334 audit(1752280878.385:161): prog-id=24 op=LOAD
Jul 12 00:41:18.394000 audit: BPF prog-id=30 op=LOAD
Jul 12 00:41:18.394000 audit: BPF prog-id=31 op=LOAD
Jul 12 00:41:18.394000 audit: BPF prog-id=18 op=UNLOAD
Jul 12 00:41:18.394000 audit: BPF prog-id=19 op=UNLOAD
Jul 12 00:41:18.395000 audit: BPF prog-id=32 op=LOAD
Jul 12 00:41:18.395000 audit: BPF prog-id=20 op=UNLOAD
Jul 12 00:41:18.404611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 12 00:41:18.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.416589 systemd[1]: Mounting boot.mount...
Jul 12 00:41:18.423853 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.425214 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:41:18.430755 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:41:18.436647 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:41:18.440896 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.441032 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:18.443344 systemd[1]: Mounted boot.mount.
Jul 12 00:41:18.451313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:41:18.451450 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:41:18.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.456828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:41:18.456959 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:41:18.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.462439 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:41:18.462583 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:41:18.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.468119 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:41:18.468212 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.468898 systemd[1]: Finished systemd-boot-update.service.
Jul 12 00:41:18.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.476109 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.477452 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:41:18.483054 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:41:18.488863 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:41:18.492967 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.493104 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:18.493942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:41:18.494090 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:41:18.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.499939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:41:18.500082 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:41:18.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.505751 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:41:18.505876 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:41:18.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.513301 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.514689 systemd[1]: Starting modprobe@dm_mod.service...
Jul 12 00:41:18.520236 systemd[1]: Starting modprobe@drm.service...
Jul 12 00:41:18.525531 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 12 00:41:18.531364 systemd[1]: Starting modprobe@loop.service...
Jul 12 00:41:18.535640 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.535769 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:18.536729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:41:18.536870 systemd[1]: Finished modprobe@dm_mod.service.
Jul 12 00:41:18.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.542188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:41:18.542338 systemd[1]: Finished modprobe@drm.service.
Jul 12 00:41:18.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.547263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:41:18.547405 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 12 00:41:18.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.553008 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:41:18.553133 systemd[1]: Finished modprobe@loop.service.
Jul 12 00:41:18.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.558604 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:41:18.558677 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 12 00:41:18.559693 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:41:18.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.696447 systemd-networkd[1248]: eth0: Gained IPv6LL
Jul 12 00:41:18.702209 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 12 00:41:18.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.889957 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 12 00:41:18.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.896779 systemd[1]: Starting audit-rules.service...
Jul 12 00:41:18.902126 systemd[1]: Starting clean-ca-certificates.service...
Jul 12 00:41:18.908455 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 12 00:41:18.913000 audit: BPF prog-id=33 op=LOAD
Jul 12 00:41:18.915836 systemd[1]: Starting systemd-resolved.service...
Jul 12 00:41:18.920000 audit: BPF prog-id=34 op=LOAD
Jul 12 00:41:18.922818 systemd[1]: Starting systemd-timesyncd.service...
Jul 12 00:41:18.928986 systemd[1]: Starting systemd-update-utmp.service...
Jul 12 00:41:18.966000 audit[1429]: SYSTEM_BOOT pid=1429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.968585 systemd[1]: Finished clean-ca-certificates.service.
Jul 12 00:41:18.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.975302 systemd[1]: Finished systemd-update-utmp.service.
Jul 12 00:41:18.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:18.981976 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:41:19.027458 systemd[1]: Started systemd-timesyncd.service.
Jul 12 00:41:19.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:19.032231 systemd[1]: Reached target time-set.target.
Jul 12 00:41:19.062150 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 12 00:41:19.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:19.340938 systemd-resolved[1426]: Positive Trust Anchors:
Jul 12 00:41:19.341349 systemd-resolved[1426]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:41:19.341436 systemd-resolved[1426]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 12 00:41:19.547080 systemd-timesyncd[1428]: Contacted time server 23.186.168.129:123 (0.flatcar.pool.ntp.org).
Jul 12 00:41:19.547149 systemd-timesyncd[1428]: Initial clock synchronization to Sat 2025-07-12 00:41:19.546973 UTC.
Jul 12 00:41:19.580587 systemd-resolved[1426]: Using system hostname 'ci-3510.3.7-n-2c4241d00d'.
Jul 12 00:41:19.582347 systemd[1]: Started systemd-resolved.service.
Jul 12 00:41:19.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 12 00:41:19.587048 systemd[1]: Reached target network.target.
Jul 12 00:41:19.591242 systemd[1]: Reached target network-online.target.
Jul 12 00:41:19.595753 systemd[1]: Reached target nss-lookup.target.
Jul 12 00:41:19.880000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 12 00:41:19.880000 audit[1444]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffed35910 a2=420 a3=0 items=0 ppid=1423 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 12 00:41:19.880000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 12 00:41:19.897248 augenrules[1444]: No rules
Jul 12 00:41:19.898384 systemd[1]: Finished audit-rules.service.
Jul 12 00:41:31.911919 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:41:31.932143 systemd[1]: Finished ldconfig.service.
Jul 12 00:41:31.938062 systemd[1]: Starting systemd-update-done.service...
Jul 12 00:41:31.992627 systemd[1]: Finished systemd-update-done.service.
Jul 12 00:41:31.997535 systemd[1]: Reached target sysinit.target.
Jul 12 00:41:32.001874 systemd[1]: Started motdgen.path.
Jul 12 00:41:32.005559 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 12 00:41:32.011781 systemd[1]: Started logrotate.timer.
Jul 12 00:41:32.015717 systemd[1]: Started mdadm.timer.
Jul 12 00:41:32.019240 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 12 00:41:32.023948 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:41:32.023980 systemd[1]: Reached target paths.target.
Jul 12 00:41:32.027991 systemd[1]: Reached target timers.target.
Jul 12 00:41:32.032530 systemd[1]: Listening on dbus.socket.
Jul 12 00:41:32.037567 systemd[1]: Starting docker.socket...
Jul 12 00:41:32.055537 systemd[1]: Listening on sshd.socket.
Jul 12 00:41:32.060774 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:32.061351 systemd[1]: Listening on docker.socket.
Jul 12 00:41:32.065835 systemd[1]: Reached target sockets.target.
Jul 12 00:41:32.070337 systemd[1]: Reached target basic.target.
Jul 12 00:41:32.074474 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:41:32.074505 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 12 00:41:32.075728 systemd[1]: Starting containerd.service...
Jul 12 00:41:32.080512 systemd[1]: Starting dbus.service...
Jul 12 00:41:32.085502 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 12 00:41:32.091484 systemd[1]: Starting extend-filesystems.service...
Jul 12 00:41:32.098647 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 12 00:41:32.100331 systemd[1]: Starting kubelet.service...
Jul 12 00:41:32.105046 systemd[1]: Starting motdgen.service...
Jul 12 00:41:32.109587 systemd[1]: Started nvidia.service.
Jul 12 00:41:32.114815 systemd[1]: Starting prepare-helm.service...
Jul 12 00:41:32.119783 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 12 00:41:32.125453 systemd[1]: Starting sshd-keygen.service...
Jul 12 00:41:32.133484 systemd[1]: Starting systemd-logind.service...
Jul 12 00:41:32.137961 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 12 00:41:32.138033 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:41:32.138535 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:41:32.139295 systemd[1]: Starting update-engine.service... Jul 12 00:41:32.144341 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 12 00:41:32.156435 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:41:32.156625 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 12 00:41:32.177516 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:41:32.177706 systemd[1]: Finished motdgen.service. Jul 12 00:41:32.229674 env[1478]: time="2025-07-12T00:41:32.229618482Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 12 00:41:32.250068 env[1478]: time="2025-07-12T00:41:32.249984847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:41:32.250203 env[1478]: time="2025-07-12T00:41:32.250176407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:41:32.251688 env[1478]: time="2025-07-12T00:41:32.251648887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:41:32.251791 env[1478]: time="2025-07-12T00:41:32.251775047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:41:32.252146 env[1478]: time="2025-07-12T00:41:32.252120367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:41:32.252231 env[1478]: time="2025-07-12T00:41:32.252216967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:41:32.252325 env[1478]: time="2025-07-12T00:41:32.252306487Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 12 00:41:32.252387 env[1478]: time="2025-07-12T00:41:32.252372327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:41:32.252545 env[1478]: time="2025-07-12T00:41:32.252525767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:41:32.252882 env[1478]: time="2025-07-12T00:41:32.252824087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:41:32.253130 env[1478]: time="2025-07-12T00:41:32.253104927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:41:32.253203 env[1478]: time="2025-07-12T00:41:32.253188727Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 12 00:41:32.253376 env[1478]: time="2025-07-12T00:41:32.253352367Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 12 00:41:32.253467 env[1478]: time="2025-07-12T00:41:32.253452207Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:41:32.262081 extend-filesystems[1455]: Found loop1 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda1 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda2 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda3 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found usr Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda4 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda6 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda7 Jul 12 00:41:32.266355 extend-filesystems[1455]: Found sda9 Jul 12 00:41:32.266355 extend-filesystems[1455]: Checking size of /dev/sda9 Jul 12 00:41:32.315567 jq[1454]: false Jul 12 00:41:32.291581 systemd[1]: Started containerd.service. Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288023095Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288069255Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288083575Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288118895Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288134735Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288149215Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288163295Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288519575Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288540495Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288553655Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288565535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288578055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288715775Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:41:32.315847 env[1478]: time="2025-07-12T00:41:32.288803775Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:41:32.316106 jq[1473]: true Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289022775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289047655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289060335Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289100055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289112535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289124655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289224615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289293095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289312655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289327975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289341015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289366535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289547175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289567135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316237 env[1478]: time="2025-07-12T00:41:32.289581055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:41:32.316557 env[1478]: time="2025-07-12T00:41:32.289606375Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:41:32.316557 env[1478]: time="2025-07-12T00:41:32.289623775Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 12 00:41:32.316557 env[1478]: time="2025-07-12T00:41:32.289637335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:41:32.316557 env[1478]: time="2025-07-12T00:41:32.289654335Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 12 00:41:32.316557 env[1478]: time="2025-07-12T00:41:32.289698415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.289920815Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.289973335Z" level=info msg="Connect containerd service" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.290015375Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.291110135Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.291322376Z" level=info msg="Start subscribing containerd event" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.291401176Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.291452136Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.291630736Z" level=info msg="containerd successfully booted in 0.062756s" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.292134216Z" level=info msg="Start recovering state" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.292200016Z" level=info msg="Start event monitor" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.292216496Z" level=info msg="Start snapshots syncer" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.292229976Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:41:32.316655 env[1478]: time="2025-07-12T00:41:32.292237456Z" level=info msg="Start streaming server" Jul 12 00:41:32.321773 systemd-logind[1470]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jul 12 00:41:32.321977 systemd-logind[1470]: New seat seat0. Jul 12 00:41:32.394920 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:41:32.395087 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 12 00:41:32.422988 jq[1509]: true Jul 12 00:41:32.467510 extend-filesystems[1455]: Old size kept for /dev/sda9 Jul 12 00:41:32.479794 extend-filesystems[1455]: Found sr0 Jul 12 00:41:32.472656 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:41:32.472826 systemd[1]: Finished extend-filesystems.service. Jul 12 00:41:32.503948 tar[1476]: linux-arm64/LICENSE Jul 12 00:41:32.503948 tar[1476]: linux-arm64/helm Jul 12 00:41:32.636005 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:41:32.636935 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 12 00:41:32.661746 systemd[1]: nvidia.service: Deactivated successfully. Jul 12 00:41:32.716984 dbus-daemon[1453]: [system] SELinux support is enabled Jul 12 00:41:32.717181 systemd[1]: Started dbus.service. 
Jul 12 00:41:32.722750 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:41:32.723333 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 12 00:41:32.722778 systemd[1]: Reached target system-config.target. Jul 12 00:41:32.730627 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:41:32.730653 systemd[1]: Reached target user-config.target. Jul 12 00:41:32.735426 systemd[1]: Started systemd-logind.service. Jul 12 00:41:33.203866 systemd[1]: Started kubelet.service. Jul 12 00:41:33.307090 tar[1476]: linux-arm64/README.md Jul 12 00:41:33.314352 systemd[1]: Finished prepare-helm.service. Jul 12 00:41:33.531978 update_engine[1472]: I0712 00:41:33.519105 1472 main.cc:92] Flatcar Update Engine starting Jul 12 00:41:33.574502 systemd[1]: Started update-engine.service. Jul 12 00:41:33.580840 update_engine[1472]: I0712 00:41:33.580801 1472 update_check_scheduler.cc:74] Next update check in 9m36s Jul 12 00:41:33.581728 systemd[1]: Started locksmithd.service. Jul 12 00:41:33.672685 kubelet[1560]: E0712 00:41:33.672612 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:41:33.674642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:41:33.674777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:41:34.252701 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:41:34.271010 systemd[1]: Finished sshd-keygen.service. 
Jul 12 00:41:34.277062 systemd[1]: Starting issuegen.service... Jul 12 00:41:34.282581 systemd[1]: Started waagent.service. Jul 12 00:41:34.288021 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:41:34.288205 systemd[1]: Finished issuegen.service. Jul 12 00:41:34.294108 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:41:34.328953 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:41:34.335677 systemd[1]: Started getty@tty1.service. Jul 12 00:41:34.342152 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 12 00:41:34.347224 systemd[1]: Reached target getty.target. Jul 12 00:41:34.351562 systemd[1]: Reached target multi-user.target. Jul 12 00:41:34.357568 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 12 00:41:34.370908 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 12 00:41:34.371122 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:41:34.380903 systemd[1]: Startup finished in 755ms (kernel) + 19.198s (initrd) + 38.001s (userspace) = 57.955s. Jul 12 00:41:35.541975 login[1582]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jul 12 00:41:35.543728 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 12 00:41:35.684121 systemd[1]: Created slice user-500.slice. Jul 12 00:41:35.686241 systemd[1]: Starting user-runtime-dir@500.service... Jul 12 00:41:35.689021 systemd-logind[1470]: New session 2 of user core. Jul 12 00:41:35.706986 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:41:35.708426 systemd[1]: Starting user@500.service... Jul 12 00:41:35.734844 (systemd)[1588]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:41:35.989512 systemd[1588]: Queued start job for default target default.target. Jul 12 00:41:35.990031 systemd[1588]: Reached target paths.target. Jul 12 00:41:35.990051 systemd[1588]: Reached target sockets.target. 
Jul 12 00:41:35.990063 systemd[1588]: Reached target timers.target. Jul 12 00:41:35.990073 systemd[1588]: Reached target basic.target. Jul 12 00:41:35.990180 systemd[1]: Started user@500.service. Jul 12 00:41:35.991085 systemd[1]: Started session-2.scope. Jul 12 00:41:35.991116 systemd[1588]: Reached target default.target. Jul 12 00:41:35.991168 systemd[1588]: Startup finished in 250ms. Jul 12 00:41:36.054853 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:41:36.542345 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 12 00:41:36.547061 systemd-logind[1470]: New session 1 of user core. Jul 12 00:41:36.547900 systemd[1]: Started session-1.scope. Jul 12 00:41:43.758449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:41:43.758619 systemd[1]: Stopped kubelet.service. Jul 12 00:41:43.759993 systemd[1]: Starting kubelet.service... Jul 12 00:41:43.857984 systemd[1]: Started kubelet.service. Jul 12 00:41:44.005000 kubelet[1614]: E0712 00:41:44.004959 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:41:44.007991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:41:44.008113 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 12 00:41:44.295354 waagent[1579]: 2025-07-12T00:41:44.295216Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Jul 12 00:41:44.301799 waagent[1579]: 2025-07-12T00:41:44.301705Z INFO Daemon Daemon OS: flatcar 3510.3.7 Jul 12 00:41:44.306561 waagent[1579]: 2025-07-12T00:41:44.306487Z INFO Daemon Daemon Python: 3.9.16 Jul 12 00:41:44.311152 waagent[1579]: 2025-07-12T00:41:44.311033Z INFO Daemon Daemon Run daemon Jul 12 00:41:44.315425 waagent[1579]: 2025-07-12T00:41:44.315355Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' Jul 12 00:41:44.332459 waagent[1579]: 2025-07-12T00:41:44.332329Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jul 12 00:41:44.347390 waagent[1579]: 2025-07-12T00:41:44.347212Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 12 00:41:44.357121 waagent[1579]: 2025-07-12T00:41:44.357046Z INFO Daemon Daemon cloud-init is enabled: False Jul 12 00:41:44.362338 waagent[1579]: 2025-07-12T00:41:44.362243Z INFO Daemon Daemon Using waagent for provisioning Jul 12 00:41:44.368197 waagent[1579]: 2025-07-12T00:41:44.368133Z INFO Daemon Daemon Activate resource disk Jul 12 00:41:44.372936 waagent[1579]: 2025-07-12T00:41:44.372871Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 12 00:41:44.387103 waagent[1579]: 2025-07-12T00:41:44.387036Z INFO Daemon Daemon Found device: None Jul 12 00:41:44.391714 waagent[1579]: 2025-07-12T00:41:44.391648Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 12 00:41:44.399973 waagent[1579]: 2025-07-12T00:41:44.399906Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 Jul 12 00:41:44.411733 waagent[1579]: 2025-07-12T00:41:44.411663Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:41:44.418001 waagent[1579]: 2025-07-12T00:41:44.417937Z INFO Daemon Daemon Running default provisioning handler Jul 12 00:41:44.431928 waagent[1579]: 2025-07-12T00:41:44.431787Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jul 12 00:41:44.447814 waagent[1579]: 2025-07-12T00:41:44.447678Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 12 00:41:44.457755 waagent[1579]: 2025-07-12T00:41:44.457677Z INFO Daemon Daemon cloud-init is enabled: False Jul 12 00:41:44.462985 waagent[1579]: 2025-07-12T00:41:44.462919Z INFO Daemon Daemon Copying ovf-env.xml Jul 12 00:41:44.563043 waagent[1579]: 2025-07-12T00:41:44.562247Z INFO Daemon Daemon Successfully mounted dvd Jul 12 00:41:44.665442 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 12 00:41:44.764059 waagent[1579]: 2025-07-12T00:41:44.763904Z INFO Daemon Daemon Detect protocol endpoint Jul 12 00:41:44.769505 waagent[1579]: 2025-07-12T00:41:44.769414Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:41:44.776070 waagent[1579]: 2025-07-12T00:41:44.775988Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 12 00:41:44.783335 waagent[1579]: 2025-07-12T00:41:44.783228Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 12 00:41:44.789396 waagent[1579]: 2025-07-12T00:41:44.789320Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 12 00:41:44.794883 waagent[1579]: 2025-07-12T00:41:44.794813Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 12 00:41:44.888637 waagent[1579]: 2025-07-12T00:41:44.888508Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 12 00:41:44.896350 waagent[1579]: 2025-07-12T00:41:44.896298Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 12 00:41:44.902216 waagent[1579]: 2025-07-12T00:41:44.902148Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 12 00:41:45.460191 waagent[1579]: 2025-07-12T00:41:45.460032Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 12 00:41:45.476005 waagent[1579]: 2025-07-12T00:41:45.475918Z INFO Daemon Daemon Forcing an update of the goal state.. Jul 12 00:41:45.481986 waagent[1579]: 2025-07-12T00:41:45.481893Z INFO Daemon Daemon Fetching goal state [incarnation 1] Jul 12 00:41:45.573823 waagent[1579]: 2025-07-12T00:41:45.573681Z INFO Daemon Daemon Found private key matching thumbprint DDDFDAD23834FE79F2983FD1A14EFBE9C05D1097 Jul 12 00:41:45.582923 waagent[1579]: 2025-07-12T00:41:45.582819Z INFO Daemon Daemon Certificate with thumbprint 648F0F4314DC8E4CC17719876489CEB8CDFEDCC8 has no matching private key. 
Jul 12 00:41:45.592650 waagent[1579]: 2025-07-12T00:41:45.592552Z INFO Daemon Daemon Fetch goal state completed Jul 12 00:41:45.669099 waagent[1579]: 2025-07-12T00:41:45.669033Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: f7b368da-3dbf-4924-9e43-6c934f0d21c3 New eTag: 4762246721166056788] Jul 12 00:41:45.680030 waagent[1579]: 2025-07-12T00:41:45.679929Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Jul 12 00:41:45.734058 waagent[1579]: 2025-07-12T00:41:45.733922Z INFO Daemon Daemon Starting provisioning Jul 12 00:41:45.739419 waagent[1579]: 2025-07-12T00:41:45.739319Z INFO Daemon Daemon Handle ovf-env.xml. Jul 12 00:41:45.744211 waagent[1579]: 2025-07-12T00:41:45.744124Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-2c4241d00d] Jul 12 00:41:45.824360 waagent[1579]: 2025-07-12T00:41:45.824197Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-2c4241d00d] Jul 12 00:41:45.831100 waagent[1579]: 2025-07-12T00:41:45.830996Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 12 00:41:45.837701 waagent[1579]: 2025-07-12T00:41:45.837614Z INFO Daemon Daemon Primary interface is [eth0] Jul 12 00:41:45.855312 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Jul 12 00:41:45.855488 systemd[1]: Stopped systemd-networkd-wait-online.service. Jul 12 00:41:45.855546 systemd[1]: Stopping systemd-networkd-wait-online.service... Jul 12 00:41:45.855816 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:41:45.859365 systemd-networkd[1248]: eth0: DHCPv6 lease lost Jul 12 00:41:45.860663 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:41:45.860852 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:41:45.863067 systemd[1]: Starting systemd-networkd.service... 
Jul 12 00:41:45.893416 systemd-networkd[1640]: enP19169s1: Link UP Jul 12 00:41:45.893429 systemd-networkd[1640]: enP19169s1: Gained carrier Jul 12 00:41:45.894713 systemd-networkd[1640]: eth0: Link UP Jul 12 00:41:45.894727 systemd-networkd[1640]: eth0: Gained carrier Jul 12 00:41:45.895080 systemd-networkd[1640]: lo: Link UP Jul 12 00:41:45.895091 systemd-networkd[1640]: lo: Gained carrier Jul 12 00:41:45.895375 systemd-networkd[1640]: eth0: Gained IPv6LL Jul 12 00:41:45.895606 systemd-networkd[1640]: Enumeration completed Jul 12 00:41:45.895849 systemd[1]: Started systemd-networkd.service. Jul 12 00:41:45.896251 systemd-networkd[1640]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:41:45.897773 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 12 00:41:45.901152 waagent[1579]: 2025-07-12T00:41:45.900976Z INFO Daemon Daemon Create user account if not exists Jul 12 00:41:45.907495 waagent[1579]: 2025-07-12T00:41:45.907398Z INFO Daemon Daemon User core already exists, skip useradd Jul 12 00:41:45.914751 waagent[1579]: 2025-07-12T00:41:45.914629Z INFO Daemon Daemon Configure sudoer Jul 12 00:41:45.920342 waagent[1579]: 2025-07-12T00:41:45.920212Z INFO Daemon Daemon Configure sshd Jul 12 00:41:45.921355 systemd-networkd[1640]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:41:45.925528 waagent[1579]: 2025-07-12T00:41:45.925428Z INFO Daemon Daemon Deploy ssh public key. Jul 12 00:41:45.932379 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 12 00:41:47.200705 waagent[1579]: 2025-07-12T00:41:47.200625Z INFO Daemon Daemon Provisioning complete Jul 12 00:41:47.221409 waagent[1579]: 2025-07-12T00:41:47.221341Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 12 00:41:47.227832 waagent[1579]: 2025-07-12T00:41:47.227745Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 12 00:41:47.238859 waagent[1579]: 2025-07-12T00:41:47.238771Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Jul 12 00:41:47.549319 waagent[1649]: 2025-07-12T00:41:47.549192Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Jul 12 00:41:47.550486 waagent[1649]: 2025-07-12T00:41:47.550425Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:41:47.550745 waagent[1649]: 2025-07-12T00:41:47.550694Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:41:47.565859 waagent[1649]: 2025-07-12T00:41:47.565111Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Jul 12 00:41:47.565859 waagent[1649]: 2025-07-12T00:41:47.565358Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Jul 12 00:41:47.641511 waagent[1649]: 2025-07-12T00:41:47.641354Z INFO ExtHandler ExtHandler Found private key matching thumbprint DDDFDAD23834FE79F2983FD1A14EFBE9C05D1097 Jul 12 00:41:47.641733 waagent[1649]: 2025-07-12T00:41:47.641678Z INFO ExtHandler ExtHandler Certificate with thumbprint 648F0F4314DC8E4CC17719876489CEB8CDFEDCC8 has no matching private key. 
Jul 12 00:41:47.641977 waagent[1649]: 2025-07-12T00:41:47.641927Z INFO ExtHandler ExtHandler Fetch goal state completed Jul 12 00:41:47.657761 waagent[1649]: 2025-07-12T00:41:47.657697Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 99592a7e-11f3-494a-abd8-5e96d865ed18 New eTag: 4762246721166056788] Jul 12 00:41:47.658375 waagent[1649]: 2025-07-12T00:41:47.658311Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Jul 12 00:41:47.770005 waagent[1649]: 2025-07-12T00:41:47.769786Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 12 00:41:47.781402 waagent[1649]: 2025-07-12T00:41:47.781310Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1649 Jul 12 00:41:47.785370 waagent[1649]: 2025-07-12T00:41:47.785295Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Jul 12 00:41:47.786784 waagent[1649]: 2025-07-12T00:41:47.786721Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 12 00:41:47.998964 waagent[1649]: 2025-07-12T00:41:47.998835Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 12 00:41:47.999383 waagent[1649]: 2025-07-12T00:41:47.999319Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 12 00:41:48.007866 waagent[1649]: 2025-07-12T00:41:48.007797Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Jul 12 00:41:48.008460 waagent[1649]: 2025-07-12T00:41:48.008395Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 12 00:41:48.009701 waagent[1649]: 2025-07-12T00:41:48.009626Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Jul 12 00:41:48.011228 waagent[1649]: 2025-07-12T00:41:48.011144Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 12 00:41:48.011980 waagent[1649]: 2025-07-12T00:41:48.011914Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:41:48.012251 waagent[1649]: 2025-07-12T00:41:48.012195Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:41:48.013508 waagent[1649]: 2025-07-12T00:41:48.013431Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 12 00:41:48.013603 waagent[1649]: 2025-07-12T00:41:48.013530Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:41:48.014025 waagent[1649]: 2025-07-12T00:41:48.013844Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 12 00:41:48.014100 waagent[1649]: 2025-07-12T00:41:48.014040Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:41:48.014657 waagent[1649]: 2025-07-12T00:41:48.014589Z INFO EnvHandler ExtHandler Configure routes Jul 12 00:41:48.015140 waagent[1649]: 2025-07-12T00:41:48.015066Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 12 00:41:48.015140 waagent[1649]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 12 00:41:48.015140 waagent[1649]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 12 00:41:48.015140 waagent[1649]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 12 00:41:48.015140 waagent[1649]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:41:48.015140 waagent[1649]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:41:48.015140 waagent[1649]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:41:48.015356 waagent[1649]: 2025-07-12T00:41:48.015171Z INFO EnvHandler ExtHandler Gateway:None Jul 12 00:41:48.015382 waagent[1649]: 2025-07-12T00:41:48.015335Z INFO EnvHandler ExtHandler Routes:None Jul 12 00:41:48.019106 waagent[1649]: 2025-07-12T00:41:48.019005Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 12 00:41:48.019363 waagent[1649]: 2025-07-12T00:41:48.019238Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 12 00:41:48.020220 waagent[1649]: 2025-07-12T00:41:48.020133Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 12 00:41:48.020452 waagent[1649]: 2025-07-12T00:41:48.020363Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 12 00:41:48.020752 waagent[1649]: 2025-07-12T00:41:48.020683Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 12 00:41:48.033144 waagent[1649]: 2025-07-12T00:41:48.033051Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Jul 12 00:41:48.034091 waagent[1649]: 2025-07-12T00:41:48.034027Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 12 00:41:48.035329 waagent[1649]: 2025-07-12T00:41:48.035232Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Jul 12 00:41:48.081565 waagent[1649]: 2025-07-12T00:41:48.081469Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1640' Jul 12 00:41:48.087842 waagent[1649]: 2025-07-12T00:41:48.087760Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Jul 12 00:41:48.239485 waagent[1649]: 2025-07-12T00:41:48.239358Z INFO MonitorHandler ExtHandler Network interfaces: Jul 12 00:41:48.239485 waagent[1649]: Executing ['ip', '-a', '-o', 'link']: Jul 12 00:41:48.239485 waagent[1649]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 12 00:41:48.239485 waagent[1649]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:84:73 brd ff:ff:ff:ff:ff:ff Jul 12 00:41:48.239485 waagent[1649]: 3: enP19169s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:84:73 brd ff:ff:ff:ff:ff:ff\ altname enP19169p0s2 Jul 12 00:41:48.239485 waagent[1649]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 12 00:41:48.239485 waagent[1649]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 12 00:41:48.239485 waagent[1649]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 
10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 12 00:41:48.239485 waagent[1649]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 12 00:41:48.239485 waagent[1649]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 12 00:41:48.239485 waagent[1649]: 2: eth0 inet6 fe80::20d:3aff:fef7:8473/64 scope link \ valid_lft forever preferred_lft forever Jul 12 00:41:48.368483 waagent[1649]: 2025-07-12T00:41:48.368413Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting Jul 12 00:41:49.243677 waagent[1579]: 2025-07-12T00:41:49.243541Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Jul 12 00:41:49.248361 waagent[1579]: 2025-07-12T00:41:49.248262Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent Jul 12 00:41:50.541935 waagent[1686]: 2025-07-12T00:41:50.541834Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) Jul 12 00:41:50.543847 waagent[1686]: 2025-07-12T00:41:50.543779Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 Jul 12 00:41:50.544103 waagent[1686]: 2025-07-12T00:41:50.544055Z INFO ExtHandler ExtHandler Python: 3.9.16 Jul 12 00:41:50.544347 waagent[1686]: 2025-07-12T00:41:50.544296Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 12 00:41:50.559503 waagent[1686]: 2025-07-12T00:41:50.559370Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 12 00:41:50.560164 waagent[1686]: 2025-07-12T00:41:50.560100Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:41:50.560475 waagent[1686]: 2025-07-12T00:41:50.560421Z INFO ExtHandler ExtHandler Wire server 
endpoint:168.63.129.16 Jul 12 00:41:50.560813 waagent[1686]: 2025-07-12T00:41:50.560757Z INFO ExtHandler ExtHandler Initializing the goal state... Jul 12 00:41:50.575227 waagent[1686]: 2025-07-12T00:41:50.575128Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:41:50.592972 waagent[1686]: 2025-07-12T00:41:50.592902Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 12 00:41:50.594332 waagent[1686]: 2025-07-12T00:41:50.594248Z INFO ExtHandler Jul 12 00:41:50.594601 waagent[1686]: 2025-07-12T00:41:50.594548Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 05d5aa03-4309-4b6e-809f-0d59f556212b eTag: 4762246721166056788 source: Fabric] Jul 12 00:41:50.595507 waagent[1686]: 2025-07-12T00:41:50.595445Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 12 00:41:50.596919 waagent[1686]: 2025-07-12T00:41:50.596855Z INFO ExtHandler Jul 12 00:41:50.597154 waagent[1686]: 2025-07-12T00:41:50.597104Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:41:50.605014 waagent[1686]: 2025-07-12T00:41:50.604953Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 12 00:41:50.605737 waagent[1686]: 2025-07-12T00:41:50.605685Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 12 00:41:50.627482 waagent[1686]: 2025-07-12T00:41:50.627414Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Jul 12 00:41:50.709333 waagent[1686]: 2025-07-12T00:41:50.709146Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DDDFDAD23834FE79F2983FD1A14EFBE9C05D1097', 'hasPrivateKey': True} Jul 12 00:41:50.710665 waagent[1686]: 2025-07-12T00:41:50.710597Z INFO ExtHandler Downloaded certificate {'thumbprint': '648F0F4314DC8E4CC17719876489CEB8CDFEDCC8', 'hasPrivateKey': False} Jul 12 00:41:50.711932 waagent[1686]: 2025-07-12T00:41:50.711866Z INFO ExtHandler Fetch goal state from WireServer completed Jul 12 00:41:50.712987 waagent[1686]: 2025-07-12T00:41:50.712923Z INFO ExtHandler ExtHandler Goal state initialization completed. Jul 12 00:41:50.734597 waagent[1686]: 2025-07-12T00:41:50.734447Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Jul 12 00:41:50.744168 waagent[1686]: 2025-07-12T00:41:50.744045Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Jul 12 00:41:50.748684 waagent[1686]: 2025-07-12T00:41:50.748557Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Jul 12 00:41:50.749083 waagent[1686]: 2025-07-12T00:41:50.749028Z INFO ExtHandler ExtHandler Checking state of the firewall Jul 12 00:41:51.101096 waagent[1686]: 2025-07-12T00:41:51.100963Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Jul 12 00:41:51.101096 waagent[1686]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:41:51.101096 waagent[1686]: pkts bytes target prot opt in out source destination Jul 12 00:41:51.101096 waagent[1686]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:41:51.101096 waagent[1686]: pkts bytes target prot opt in out source destination Jul 12 00:41:51.101096 waagent[1686]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:41:51.101096 waagent[1686]: pkts bytes target prot opt in 
out source destination Jul 12 00:41:51.101096 waagent[1686]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 12 00:41:51.101096 waagent[1686]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 12 00:41:51.101096 waagent[1686]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 12 00:41:51.102777 waagent[1686]: 2025-07-12T00:41:51.102706Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Jul 12 00:41:51.106206 waagent[1686]: 2025-07-12T00:41:51.106084Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Jul 12 00:41:51.106657 waagent[1686]: 2025-07-12T00:41:51.106597Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 12 00:41:51.107133 waagent[1686]: 2025-07-12T00:41:51.107072Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 12 00:41:51.116263 waagent[1686]: 2025-07-12T00:41:51.116203Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Jul 12 00:41:51.117025 waagent[1686]: 2025-07-12T00:41:51.116960Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 12 00:41:51.125751 waagent[1686]: 2025-07-12T00:41:51.125666Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1686 Jul 12 00:41:51.129253 waagent[1686]: 2025-07-12T00:41:51.129170Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] Jul 12 00:41:51.130336 waagent[1686]: 2025-07-12T00:41:51.130245Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Jul 12 00:41:51.131415 waagent[1686]: 2025-07-12T00:41:51.131349Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 12 00:41:51.134496 waagent[1686]: 2025-07-12T00:41:51.134425Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 12 00:41:51.136086 waagent[1686]: 2025-07-12T00:41:51.136012Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 12 00:41:51.136514 waagent[1686]: 2025-07-12T00:41:51.136435Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:41:51.136994 waagent[1686]: 2025-07-12T00:41:51.136932Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:41:51.137594 waagent[1686]: 2025-07-12T00:41:51.137527Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 12 00:41:51.137921 waagent[1686]: 2025-07-12T00:41:51.137860Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 12 00:41:51.137921 waagent[1686]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 12 00:41:51.137921 waagent[1686]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 12 00:41:51.137921 waagent[1686]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 12 00:41:51.137921 waagent[1686]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:41:51.137921 waagent[1686]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:41:51.137921 waagent[1686]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:41:51.140473 waagent[1686]: 2025-07-12T00:41:51.140341Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 12 00:41:51.141180 waagent[1686]: 2025-07-12T00:41:51.141095Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:41:51.141822 waagent[1686]: 2025-07-12T00:41:51.141753Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:41:51.142484 waagent[1686]: 2025-07-12T00:41:51.142247Z INFO EnvHandler ExtHandler Configure routes Jul 12 00:41:51.142765 waagent[1686]: 2025-07-12T00:41:51.142673Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 12 00:41:51.142952 waagent[1686]: 2025-07-12T00:41:51.142884Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 12 00:41:51.146740 waagent[1686]: 2025-07-12T00:41:51.146613Z INFO EnvHandler ExtHandler Gateway:None Jul 12 00:41:51.150340 waagent[1686]: 2025-07-12T00:41:51.150149Z INFO EnvHandler ExtHandler Routes:None Jul 12 00:41:51.151593 waagent[1686]: 2025-07-12T00:41:51.151508Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 12 00:41:51.151871 waagent[1686]: 2025-07-12T00:41:51.151805Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Jul 12 00:41:51.156608 waagent[1686]: 2025-07-12T00:41:51.156526Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 12 00:41:51.157441 waagent[1686]: 2025-07-12T00:41:51.157359Z INFO MonitorHandler ExtHandler Network interfaces: Jul 12 00:41:51.157441 waagent[1686]: Executing ['ip', '-a', '-o', 'link']: Jul 12 00:41:51.157441 waagent[1686]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 12 00:41:51.157441 waagent[1686]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:84:73 brd ff:ff:ff:ff:ff:ff Jul 12 00:41:51.157441 waagent[1686]: 3: enP19169s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:84:73 brd ff:ff:ff:ff:ff:ff\ altname enP19169p0s2 Jul 12 00:41:51.157441 waagent[1686]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 12 00:41:51.157441 waagent[1686]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 12 00:41:51.157441 waagent[1686]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 12 00:41:51.157441 waagent[1686]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 12 00:41:51.157441 waagent[1686]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 12 00:41:51.157441 waagent[1686]: 2: eth0 inet6 fe80::20d:3aff:fef7:8473/64 scope link \ valid_lft forever preferred_lft forever Jul 12 00:41:51.175536 waagent[1686]: 2025-07-12T00:41:51.175436Z INFO ExtHandler ExtHandler Downloading agent manifest Jul 12 00:41:51.230068 waagent[1686]: 2025-07-12T00:41:51.229985Z INFO ExtHandler ExtHandler Jul 12 00:41:51.230226 waagent[1686]: 2025-07-12T00:41:51.230166Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState 
started [incarnation_1 channel: WireServer source: Fabric activity: a7dc1078-898b-4bf6-9a53-422cf052c0bd correlation 081ed29e-b789-4ee6-8c10-f1e92e567fe5 created: 2025-07-12T00:39:54.630096Z] Jul 12 00:41:51.231396 waagent[1686]: 2025-07-12T00:41:51.231329Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 12 00:41:51.233359 waagent[1686]: 2025-07-12T00:41:51.233296Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jul 12 00:41:51.254498 waagent[1686]: 2025-07-12T00:41:51.254419Z INFO ExtHandler ExtHandler Looking for existing remote access users. Jul 12 00:41:51.256981 waagent[1686]: 2025-07-12T00:41:51.256913Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C19450FE-7B00-48DB-B48B-A3278BDAA329;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Jul 12 00:41:51.298974 waagent[1686]: 2025-07-12T00:41:51.298843Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Jul 12 00:41:51.320207 waagent[1686]: 2025-07-12T00:41:51.320140Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 12 00:41:54.258483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:41:54.258653 systemd[1]: Stopped kubelet.service. Jul 12 00:41:54.260079 systemd[1]: Starting kubelet.service... Jul 12 00:41:54.537948 systemd[1]: Started kubelet.service. 
Jul 12 00:41:54.576479 kubelet[1733]: E0712 00:41:54.576419 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:41:54.578807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:41:54.578932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:42:04.050553 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 12 00:42:04.758523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:42:04.758699 systemd[1]: Stopped kubelet.service. Jul 12 00:42:04.760151 systemd[1]: Starting kubelet.service... Jul 12 00:42:04.938034 systemd[1]: Started kubelet.service. Jul 12 00:42:04.983160 kubelet[1743]: E0712 00:42:04.983091 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:42:04.985398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:42:04.985522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:42:11.394486 systemd[1]: Created slice system-sshd.slice. Jul 12 00:42:11.396428 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:48052.service. 
Jul 12 00:42:12.196483 sshd[1749]: Accepted publickey for core from 10.200.16.10 port 48052 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:42:12.240266 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:42:12.244109 systemd-logind[1470]: New session 3 of user core. Jul 12 00:42:12.244634 systemd[1]: Started session-3.scope. Jul 12 00:42:12.619925 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:48068.service. Jul 12 00:42:13.068782 sshd[1754]: Accepted publickey for core from 10.200.16.10 port 48068 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:42:13.070046 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:42:13.073960 systemd-logind[1470]: New session 4 of user core. Jul 12 00:42:13.074393 systemd[1]: Started session-4.scope. Jul 12 00:42:13.409713 sshd[1754]: pam_unix(sshd:session): session closed for user core Jul 12 00:42:13.412855 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:48068.service: Deactivated successfully. Jul 12 00:42:13.413602 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:42:13.414134 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:42:13.414947 systemd-logind[1470]: Removed session 4. Jul 12 00:42:13.484795 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:48082.service. Jul 12 00:42:13.932042 sshd[1760]: Accepted publickey for core from 10.200.16.10 port 48082 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:42:13.933639 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:42:13.937908 systemd[1]: Started session-5.scope. Jul 12 00:42:13.938554 systemd-logind[1470]: New session 5 of user core. 
Jul 12 00:42:14.269381 sshd[1760]: pam_unix(sshd:session): session closed for user core Jul 12 00:42:14.272014 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:48082.service: Deactivated successfully. Jul 12 00:42:14.272697 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:42:14.273188 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:42:14.274038 systemd-logind[1470]: Removed session 5. Jul 12 00:42:14.339338 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:48094.service. Jul 12 00:42:14.772860 sshd[1766]: Accepted publickey for core from 10.200.16.10 port 48094 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:42:14.774121 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:42:14.778084 systemd-logind[1470]: New session 6 of user core. Jul 12 00:42:14.778490 systemd[1]: Started session-6.scope. Jul 12 00:42:15.008432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 12 00:42:15.008592 systemd[1]: Stopped kubelet.service. Jul 12 00:42:15.009976 systemd[1]: Starting kubelet.service... Jul 12 00:42:15.097070 sshd[1766]: pam_unix(sshd:session): session closed for user core Jul 12 00:42:15.099924 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:48094.service: Deactivated successfully. Jul 12 00:42:15.100699 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:42:15.101205 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:42:15.102031 systemd-logind[1470]: Removed session 6. Jul 12 00:42:15.180944 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:48100.service. Jul 12 00:42:15.331854 systemd[1]: Started kubelet.service. 
Jul 12 00:42:15.374257 kubelet[1778]: E0712 00:42:15.373934 1778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:42:15.376186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:42:15.376339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:42:15.669909 sshd[1774]: Accepted publickey for core from 10.200.16.10 port 48100 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:42:15.671167 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:42:15.675934 systemd[1]: Started session-7.scope. Jul 12 00:42:15.676347 systemd-logind[1470]: New session 7 of user core. Jul 12 00:42:16.548758 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:42:16.548974 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:42:16.569411 systemd[1]: Starting docker.service... 
Jul 12 00:42:16.601786 env[1798]: time="2025-07-12T00:42:16.601699691Z" level=info msg="Starting up" Jul 12 00:42:16.603474 env[1798]: time="2025-07-12T00:42:16.603446731Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:42:16.603577 env[1798]: time="2025-07-12T00:42:16.603563291Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:42:16.603664 env[1798]: time="2025-07-12T00:42:16.603648971Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:42:16.603723 env[1798]: time="2025-07-12T00:42:16.603710051Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:42:16.605388 env[1798]: time="2025-07-12T00:42:16.605363531Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 12 00:42:16.605489 env[1798]: time="2025-07-12T00:42:16.605474091Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 12 00:42:16.605546 env[1798]: time="2025-07-12T00:42:16.605532691Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 12 00:42:16.605596 env[1798]: time="2025-07-12T00:42:16.605584451Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 12 00:42:16.702914 env[1798]: time="2025-07-12T00:42:16.702867572Z" level=info msg="Loading containers: start." Jul 12 00:42:16.995304 kernel: Initializing XFRM netlink socket Jul 12 00:42:17.057111 env[1798]: time="2025-07-12T00:42:17.057072616Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 12 00:42:17.246736 systemd-networkd[1640]: docker0: Link UP Jul 12 00:42:17.278767 env[1798]: time="2025-07-12T00:42:17.278719859Z" level=info msg="Loading containers: done." 
Jul 12 00:42:17.288291 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1535135055-merged.mount: Deactivated successfully. Jul 12 00:42:17.310681 env[1798]: time="2025-07-12T00:42:17.310411099Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:42:17.310681 env[1798]: time="2025-07-12T00:42:17.310602979Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 12 00:42:17.310895 env[1798]: time="2025-07-12T00:42:17.310863179Z" level=info msg="Daemon has completed initialization" Jul 12 00:42:17.345652 systemd[1]: Started docker.service. Jul 12 00:42:17.351358 env[1798]: time="2025-07-12T00:42:17.351297620Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:42:18.428093 update_engine[1472]: I0712 00:42:18.427701 1472 update_attempter.cc:509] Updating boot flags... Jul 12 00:42:18.561148 env[1478]: time="2025-07-12T00:42:18.561103233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 00:42:19.627627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424019611.mount: Deactivated successfully. 
Jul 12 00:42:21.272238 env[1478]: time="2025-07-12T00:42:21.272181941Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:21.281939 env[1478]: time="2025-07-12T00:42:21.281886221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:21.286438 env[1478]: time="2025-07-12T00:42:21.286402181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:21.296670 env[1478]: time="2025-07-12T00:42:21.296626701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:21.297649 env[1478]: time="2025-07-12T00:42:21.297624101Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 12 00:42:21.298812 env[1478]: time="2025-07-12T00:42:21.298787501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 00:42:22.872887 env[1478]: time="2025-07-12T00:42:22.872828035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:22.890083 env[1478]: time="2025-07-12T00:42:22.890027555Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 12 00:42:22.901353 env[1478]: time="2025-07-12T00:42:22.901323835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:22.909515 env[1478]: time="2025-07-12T00:42:22.909470715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:22.910320 env[1478]: time="2025-07-12T00:42:22.910263275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 12 00:42:22.910988 env[1478]: time="2025-07-12T00:42:22.910965875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 00:42:24.314074 env[1478]: time="2025-07-12T00:42:24.314026840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:24.323680 env[1478]: time="2025-07-12T00:42:24.323637387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:24.329376 env[1478]: time="2025-07-12T00:42:24.329330949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:24.336825 env[1478]: time="2025-07-12T00:42:24.336779689Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:24.337819 env[1478]: time="2025-07-12T00:42:24.337787042Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 12 00:42:24.338394 env[1478]: time="2025-07-12T00:42:24.338361569Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 00:42:25.508519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 12 00:42:25.508689 systemd[1]: Stopped kubelet.service. Jul 12 00:42:25.510074 systemd[1]: Starting kubelet.service... Jul 12 00:42:25.620021 systemd[1]: Started kubelet.service. Jul 12 00:42:25.735135 kubelet[1953]: E0712 00:42:25.735096 1953 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:42:25.737328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:42:25.737451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:42:25.782981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894988369.mount: Deactivated successfully. 
Jul 12 00:42:26.600231 env[1478]: time="2025-07-12T00:42:26.600178534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:27.008843 env[1478]: time="2025-07-12T00:42:27.008792031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:27.019861 env[1478]: time="2025-07-12T00:42:27.019814033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:27.025573 env[1478]: time="2025-07-12T00:42:27.025518491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:27.025936 env[1478]: time="2025-07-12T00:42:27.025900047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:42:27.026938 env[1478]: time="2025-07-12T00:42:27.026892132Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:42:28.300297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502432592.mount: Deactivated successfully. 
Jul 12 00:42:29.680414 env[1478]: time="2025-07-12T00:42:29.680353227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:29.694248 env[1478]: time="2025-07-12T00:42:29.694205987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:29.699787 env[1478]: time="2025-07-12T00:42:29.699752658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:29.706136 env[1478]: time="2025-07-12T00:42:29.706104401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:29.706980 env[1478]: time="2025-07-12T00:42:29.706942589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:42:29.708336 env[1478]: time="2025-07-12T00:42:29.708313479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:42:30.353393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2492461107.mount: Deactivated successfully. 
Jul 12 00:42:30.400382 env[1478]: time="2025-07-12T00:42:30.400335380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:30.411452 env[1478]: time="2025-07-12T00:42:30.411412637Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:30.418411 env[1478]: time="2025-07-12T00:42:30.418381534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:30.427655 env[1478]: time="2025-07-12T00:42:30.427610388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:30.428784 env[1478]: time="2025-07-12T00:42:30.428250760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:42:30.429292 env[1478]: time="2025-07-12T00:42:30.429250053Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 00:42:31.189825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378848115.mount: Deactivated successfully. 
Jul 12 00:42:34.285643 env[1478]: time="2025-07-12T00:42:34.285581923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:34.295658 env[1478]: time="2025-07-12T00:42:34.295604724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:34.299743 env[1478]: time="2025-07-12T00:42:34.299697093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:34.305787 env[1478]: time="2025-07-12T00:42:34.305751713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:34.306611 env[1478]: time="2025-07-12T00:42:34.306582314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 12 00:42:35.758470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 12 00:42:35.758641 systemd[1]: Stopped kubelet.service. Jul 12 00:42:35.760002 systemd[1]: Starting kubelet.service... Jul 12 00:42:35.941828 systemd[1]: Started kubelet.service. 
Jul 12 00:42:35.989400 kubelet[1978]: E0712 00:42:35.989353 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:42:35.991170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:42:35.991312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:42:40.221676 systemd[1]: Stopped kubelet.service. Jul 12 00:42:40.223694 systemd[1]: Starting kubelet.service... Jul 12 00:42:40.268165 systemd[1]: Reloading. Jul 12 00:42:40.356680 /usr/lib/systemd/system-generators/torcx-generator[2014]: time="2025-07-12T00:42:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:42:40.357033 /usr/lib/systemd/system-generators/torcx-generator[2014]: time="2025-07-12T00:42:40Z" level=info msg="torcx already run" Jul 12 00:42:40.429011 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:42:40.429035 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:42:40.445041 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:42:40.537060 systemd[1]: Started kubelet.service. Jul 12 00:42:40.538780 systemd[1]: Stopping kubelet.service... 
Jul 12 00:42:40.539127 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:42:40.539341 systemd[1]: Stopped kubelet.service. Jul 12 00:42:40.540864 systemd[1]: Starting kubelet.service... Jul 12 00:42:40.709870 systemd[1]: Started kubelet.service. Jul 12 00:42:40.893710 kubelet[2079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:42:40.893710 kubelet[2079]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:42:40.893710 kubelet[2079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:42:40.893710 kubelet[2079]: I0712 00:42:40.893370 2079 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:42:41.357923 kubelet[2079]: I0712 00:42:41.357880 2079 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:42:41.357923 kubelet[2079]: I0712 00:42:41.357913 2079 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:42:41.358191 kubelet[2079]: I0712 00:42:41.358169 2079 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:42:41.385208 kubelet[2079]: E0712 00:42:41.385173 2079 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:41.386762 kubelet[2079]: I0712 00:42:41.386722 2079 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:42:41.394562 kubelet[2079]: E0712 00:42:41.394534 2079 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:42:41.394709 kubelet[2079]: I0712 00:42:41.394693 2079 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:42:41.397900 kubelet[2079]: I0712 00:42:41.397877 2079 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:42:41.398936 kubelet[2079]: I0712 00:42:41.398896 2079 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:42:41.399200 kubelet[2079]: I0712 00:42:41.399024 2079 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-2c4241d00d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:42:41.399386 kubelet[2079]: I0712 00:42:41.399369 2079 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 12 00:42:41.399461 kubelet[2079]: I0712 00:42:41.399451 2079 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:42:41.399654 kubelet[2079]: I0712 00:42:41.399637 2079 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:42:41.402620 kubelet[2079]: I0712 00:42:41.402594 2079 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:42:41.402761 kubelet[2079]: I0712 00:42:41.402745 2079 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:42:41.402869 kubelet[2079]: I0712 00:42:41.402856 2079 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:42:41.402937 kubelet[2079]: I0712 00:42:41.402926 2079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:42:41.410059 kubelet[2079]: W0712 00:42:41.409861 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2c4241d00d&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:41.410059 kubelet[2079]: E0712 00:42:41.409926 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2c4241d00d&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:41.410288 kubelet[2079]: W0712 00:42:41.410242 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:41.410345 kubelet[2079]: E0712 00:42:41.410300 2079 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:41.410378 kubelet[2079]: I0712 00:42:41.410368 2079 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:42:41.410837 kubelet[2079]: I0712 00:42:41.410809 2079 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:42:41.410886 kubelet[2079]: W0712 00:42:41.410868 2079 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:42:41.411658 kubelet[2079]: I0712 00:42:41.411630 2079 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:42:41.411727 kubelet[2079]: I0712 00:42:41.411667 2079 server.go:1287] "Started kubelet" Jul 12 00:42:41.418857 kubelet[2079]: E0712 00:42:41.418732 2079 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-2c4241d00d.18515a401b9f0c64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-2c4241d00d,UID:ci-3510.3.7-n-2c4241d00d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-2c4241d00d,},FirstTimestamp:2025-07-12 00:42:41.411648612 +0000 UTC m=+0.697473014,LastTimestamp:2025-07-12 00:42:41.411648612 +0000 UTC m=+0.697473014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-2c4241d00d,}" Jul 12 00:42:41.420963 kubelet[2079]: E0712 00:42:41.420943 2079 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:42:41.421983 kubelet[2079]: I0712 00:42:41.421915 2079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:42:41.422417 kubelet[2079]: I0712 00:42:41.422402 2079 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:42:41.422614 kubelet[2079]: I0712 00:42:41.422578 2079 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:42:41.423660 kubelet[2079]: I0712 00:42:41.423643 2079 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:42:41.425080 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 12 00:42:41.425290 kubelet[2079]: I0712 00:42:41.425248 2079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:42:41.425769 kubelet[2079]: I0712 00:42:41.425753 2079 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:42:41.425995 kubelet[2079]: I0712 00:42:41.425975 2079 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:42:41.427480 kubelet[2079]: I0712 00:42:41.427462 2079 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:42:41.427629 kubelet[2079]: I0712 00:42:41.427618 2079 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:42:41.427915 kubelet[2079]: E0712 00:42:41.427878 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" Jul 12 00:42:41.428615 kubelet[2079]: E0712 00:42:41.428578 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2c4241d00d?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Jul 12 00:42:41.428693 kubelet[2079]: W0712 00:42:41.428659 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:41.428738 kubelet[2079]: E0712 00:42:41.428698 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:41.428858 kubelet[2079]: I0712 
00:42:41.428841 2079 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:42:41.429250 kubelet[2079]: I0712 00:42:41.429228 2079 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:42:41.435533 kubelet[2079]: I0712 00:42:41.435499 2079 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:42:41.440206 kubelet[2079]: E0712 00:42:41.440094 2079 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-2c4241d00d.18515a401b9f0c64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-2c4241d00d,UID:ci-3510.3.7-n-2c4241d00d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-2c4241d00d,},FirstTimestamp:2025-07-12 00:42:41.411648612 +0000 UTC m=+0.697473014,LastTimestamp:2025-07-12 00:42:41.411648612 +0000 UTC m=+0.697473014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-2c4241d00d,}" Jul 12 00:42:41.524250 kubelet[2079]: I0712 00:42:41.524213 2079 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:42:41.524420 kubelet[2079]: I0712 00:42:41.524406 2079 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:42:41.524490 kubelet[2079]: I0712 00:42:41.524479 2079 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:42:41.528441 kubelet[2079]: E0712 00:42:41.528409 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2c4241d00d\" not 
found" Jul 12 00:42:41.529623 kubelet[2079]: I0712 00:42:41.529606 2079 policy_none.go:49] "None policy: Start" Jul 12 00:42:41.529704 kubelet[2079]: I0712 00:42:41.529693 2079 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:42:41.529757 kubelet[2079]: I0712 00:42:41.529748 2079 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:42:41.538637 systemd[1]: Created slice kubepods.slice. Jul 12 00:42:41.543035 systemd[1]: Created slice kubepods-burstable.slice. Jul 12 00:42:41.545825 systemd[1]: Created slice kubepods-besteffort.slice. Jul 12 00:42:41.553106 kubelet[2079]: I0712 00:42:41.553081 2079 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:42:41.553402 kubelet[2079]: I0712 00:42:41.553385 2079 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:42:41.553526 kubelet[2079]: I0712 00:42:41.553492 2079 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:42:41.555191 kubelet[2079]: I0712 00:42:41.555170 2079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:42:41.557309 kubelet[2079]: I0712 00:42:41.557064 2079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:42:41.557844 kubelet[2079]: E0712 00:42:41.557816 2079 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:42:41.557922 kubelet[2079]: E0712 00:42:41.557863 2079 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-2c4241d00d\" not found" Jul 12 00:42:41.558802 kubelet[2079]: I0712 00:42:41.558783 2079 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:42:41.559073 kubelet[2079]: I0712 00:42:41.559048 2079 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:42:41.559143 kubelet[2079]: I0712 00:42:41.559080 2079 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:42:41.559143 kubelet[2079]: I0712 00:42:41.559089 2079 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:42:41.559143 kubelet[2079]: E0712 00:42:41.559134 2079 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 12 00:42:41.560115 kubelet[2079]: W0712 00:42:41.560076 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:41.560791 kubelet[2079]: E0712 00:42:41.560756 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:41.629238 kubelet[2079]: E0712 00:42:41.629123 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2c4241d00d?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Jul 12 00:42:41.655635 kubelet[2079]: I0712 00:42:41.655603 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.656078 kubelet[2079]: E0712 00:42:41.656050 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.668050 systemd[1]: Created slice kubepods-burstable-pod9089ba86c2b5671cd8831609dcc2ebba.slice. Jul 12 00:42:41.686429 kubelet[2079]: E0712 00:42:41.686403 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.688509 systemd[1]: Created slice kubepods-burstable-pod565eb315c5f6a07bb337c89a5b0a9ab5.slice. Jul 12 00:42:41.690251 kubelet[2079]: E0712 00:42:41.690234 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.699461 systemd[1]: Created slice kubepods-burstable-pod7e69f0c661584f5000bd9335ca0a3cb1.slice. Jul 12 00:42:41.701192 kubelet[2079]: E0712 00:42:41.701175 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729239 kubelet[2079]: I0712 00:42:41.729209 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9089ba86c2b5671cd8831609dcc2ebba-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" (UID: \"9089ba86c2b5671cd8831609dcc2ebba\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729424 kubelet[2079]: I0712 00:42:41.729408 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9089ba86c2b5671cd8831609dcc2ebba-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" (UID: \"9089ba86c2b5671cd8831609dcc2ebba\") " 
pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729530 kubelet[2079]: I0712 00:42:41.729511 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9089ba86c2b5671cd8831609dcc2ebba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" (UID: \"9089ba86c2b5671cd8831609dcc2ebba\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729619 kubelet[2079]: I0712 00:42:41.729606 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729712 kubelet[2079]: I0712 00:42:41.729700 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729809 kubelet[2079]: I0712 00:42:41.729794 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.729905 kubelet[2079]: I0712 00:42:41.729892 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.730001 kubelet[2079]: I0712 00:42:41.729987 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.730099 kubelet[2079]: I0712 00:42:41.730087 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e69f0c661584f5000bd9335ca0a3cb1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-2c4241d00d\" (UID: \"7e69f0c661584f5000bd9335ca0a3cb1\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.858083 kubelet[2079]: I0712 00:42:41.858056 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.858665 kubelet[2079]: E0712 00:42:41.858639 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:41.988793 env[1478]: time="2025-07-12T00:42:41.988224389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-2c4241d00d,Uid:9089ba86c2b5671cd8831609dcc2ebba,Namespace:kube-system,Attempt:0,}" Jul 12 00:42:41.991797 env[1478]: time="2025-07-12T00:42:41.991765388Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-2c4241d00d,Uid:565eb315c5f6a07bb337c89a5b0a9ab5,Namespace:kube-system,Attempt:0,}" Jul 12 00:42:42.002808 env[1478]: time="2025-07-12T00:42:42.002760320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-2c4241d00d,Uid:7e69f0c661584f5000bd9335ca0a3cb1,Namespace:kube-system,Attempt:0,}" Jul 12 00:42:42.030732 kubelet[2079]: E0712 00:42:42.030691 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2c4241d00d?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Jul 12 00:42:42.232151 kubelet[2079]: W0712 00:42:42.232052 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2c4241d00d&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:42.232151 kubelet[2079]: E0712 00:42:42.232119 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-2c4241d00d&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:42.260901 kubelet[2079]: I0712 00:42:42.260830 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:42.261194 kubelet[2079]: E0712 00:42:42.261166 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:42.477553 kubelet[2079]: W0712 00:42:42.477449 2079 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:42.477553 kubelet[2079]: E0712 00:42:42.477516 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:42.638313 kubelet[2079]: W0712 00:42:42.638146 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:42.638313 kubelet[2079]: E0712 00:42:42.638198 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:42.718083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329741640.mount: Deactivated successfully. 
Jul 12 00:42:42.780792 env[1478]: time="2025-07-12T00:42:42.780747053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.783976 env[1478]: time="2025-07-12T00:42:42.783949245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.802609 env[1478]: time="2025-07-12T00:42:42.802559605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.805950 env[1478]: time="2025-07-12T00:42:42.805920785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.810948 env[1478]: time="2025-07-12T00:42:42.810917639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.820776 env[1478]: time="2025-07-12T00:42:42.820740639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.827166 env[1478]: time="2025-07-12T00:42:42.827122465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.831524 kubelet[2079]: E0712 00:42:42.831486 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-2c4241d00d?timeout=10s\": dial tcp 
10.200.20.11:6443: connect: connection refused" interval="1.6s" Jul 12 00:42:42.833226 env[1478]: time="2025-07-12T00:42:42.833196475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.834900 kubelet[2079]: W0712 00:42:42.834847 2079 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 12 00:42:42.834982 kubelet[2079]: E0712 00:42:42.834911 2079 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:42.838677 env[1478]: time="2025-07-12T00:42:42.838643574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.845473 env[1478]: time="2025-07-12T00:42:42.845438568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.859527 env[1478]: time="2025-07-12T00:42:42.859481162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:42.884426 env[1478]: time="2025-07-12T00:42:42.884378276Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:42:43.063267 kubelet[2079]: I0712 00:42:43.063233 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:43.063711 kubelet[2079]: E0712 00:42:43.063678 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:43.483894 kubelet[2079]: E0712 00:42:43.483765 2079 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:42:43.923078 env[1478]: time="2025-07-12T00:42:43.922996032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:42:43.923391 env[1478]: time="2025-07-12T00:42:43.923083985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:42:43.923391 env[1478]: time="2025-07-12T00:42:43.923118782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:42:43.928330 env[1478]: time="2025-07-12T00:42:43.923358284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6894e8007554b9c471cb0b2e3081e445148357b7aa12ab0a67d37fa70b0de18 pid=2119 runtime=io.containerd.runc.v2 Jul 12 00:42:43.947721 systemd[1]: Started cri-containerd-d6894e8007554b9c471cb0b2e3081e445148357b7aa12ab0a67d37fa70b0de18.scope. Jul 12 00:42:43.957652 env[1478]: time="2025-07-12T00:42:43.957511350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:42:43.957652 env[1478]: time="2025-07-12T00:42:43.957544467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:42:43.957652 env[1478]: time="2025-07-12T00:42:43.957553867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:42:43.957896 env[1478]: time="2025-07-12T00:42:43.957844445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd9b1fe94d3a052712c3b402fd0578e95f23b297c609df73733af7654ac5805d pid=2163 runtime=io.containerd.runc.v2 Jul 12 00:42:43.959110 env[1478]: time="2025-07-12T00:42:43.959048594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:42:43.959195 env[1478]: time="2025-07-12T00:42:43.959122468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:42:43.959195 env[1478]: time="2025-07-12T00:42:43.959151186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:42:43.961199 env[1478]: time="2025-07-12T00:42:43.959401327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05ed621901d9377032b195b0cae389726c205e92b36f2ad2aa82eaa7babae893 pid=2162 runtime=io.containerd.runc.v2 Jul 12 00:42:43.973603 systemd[1]: Started cri-containerd-05ed621901d9377032b195b0cae389726c205e92b36f2ad2aa82eaa7babae893.scope. Jul 12 00:42:43.997442 systemd[1]: Started cri-containerd-dd9b1fe94d3a052712c3b402fd0578e95f23b297c609df73733af7654ac5805d.scope. Jul 12 00:42:44.006599 env[1478]: time="2025-07-12T00:42:44.006551303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-2c4241d00d,Uid:565eb315c5f6a07bb337c89a5b0a9ab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6894e8007554b9c471cb0b2e3081e445148357b7aa12ab0a67d37fa70b0de18\"" Jul 12 00:42:44.012443 env[1478]: time="2025-07-12T00:42:44.012410513Z" level=info msg="CreateContainer within sandbox \"d6894e8007554b9c471cb0b2e3081e445148357b7aa12ab0a67d37fa70b0de18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:42:44.025954 env[1478]: time="2025-07-12T00:42:44.025918040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-2c4241d00d,Uid:9089ba86c2b5671cd8831609dcc2ebba,Namespace:kube-system,Attempt:0,} returns sandbox id \"05ed621901d9377032b195b0cae389726c205e92b36f2ad2aa82eaa7babae893\"" Jul 12 00:42:44.033412 env[1478]: time="2025-07-12T00:42:44.033366053Z" level=info msg="CreateContainer within sandbox \"05ed621901d9377032b195b0cae389726c205e92b36f2ad2aa82eaa7babae893\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:42:44.041923 env[1478]: time="2025-07-12T00:42:44.041885907Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-2c4241d00d,Uid:7e69f0c661584f5000bd9335ca0a3cb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd9b1fe94d3a052712c3b402fd0578e95f23b297c609df73733af7654ac5805d\"" Jul 12 00:42:44.046975 env[1478]: time="2025-07-12T00:42:44.046947456Z" level=info msg="CreateContainer within sandbox \"dd9b1fe94d3a052712c3b402fd0578e95f23b297c609df73733af7654ac5805d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:42:44.093946 env[1478]: time="2025-07-12T00:42:44.093889607Z" level=info msg="CreateContainer within sandbox \"d6894e8007554b9c471cb0b2e3081e445148357b7aa12ab0a67d37fa70b0de18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3d742bc2c13dd5c7694644194c199d7dd357100d00c52d79f94d27db4fa86e4a\"" Jul 12 00:42:44.094636 env[1478]: time="2025-07-12T00:42:44.094595315Z" level=info msg="StartContainer for \"3d742bc2c13dd5c7694644194c199d7dd357100d00c52d79f94d27db4fa86e4a\"" Jul 12 00:42:44.108361 systemd[1]: Started cri-containerd-3d742bc2c13dd5c7694644194c199d7dd357100d00c52d79f94d27db4fa86e4a.scope. 
Jul 12 00:42:44.136400 env[1478]: time="2025-07-12T00:42:44.136359567Z" level=info msg="CreateContainer within sandbox \"05ed621901d9377032b195b0cae389726c205e92b36f2ad2aa82eaa7babae893\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"487c0842967ce4e15264791ba871ec086ab70d85e357c82dd3a1765567d210dd\"" Jul 12 00:42:44.136944 env[1478]: time="2025-07-12T00:42:44.136916366Z" level=info msg="StartContainer for \"487c0842967ce4e15264791ba871ec086ab70d85e357c82dd3a1765567d210dd\"" Jul 12 00:42:44.148912 env[1478]: time="2025-07-12T00:42:44.148860208Z" level=info msg="CreateContainer within sandbox \"dd9b1fe94d3a052712c3b402fd0578e95f23b297c609df73733af7654ac5805d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c89424adb85b7eba1c1137416018e344ba25bbedfe4b5e9e7c7ee39f0c4083d1\"" Jul 12 00:42:44.149733 env[1478]: time="2025-07-12T00:42:44.149690108Z" level=info msg="StartContainer for \"c89424adb85b7eba1c1137416018e344ba25bbedfe4b5e9e7c7ee39f0c4083d1\"" Jul 12 00:42:44.153223 env[1478]: time="2025-07-12T00:42:44.153182491Z" level=info msg="StartContainer for \"3d742bc2c13dd5c7694644194c199d7dd357100d00c52d79f94d27db4fa86e4a\" returns successfully" Jul 12 00:42:44.167856 systemd[1]: Started cri-containerd-487c0842967ce4e15264791ba871ec086ab70d85e357c82dd3a1765567d210dd.scope. Jul 12 00:42:44.174563 systemd[1]: Started cri-containerd-c89424adb85b7eba1c1137416018e344ba25bbedfe4b5e9e7c7ee39f0c4083d1.scope. 
Jul 12 00:42:44.222106 env[1478]: time="2025-07-12T00:42:44.222043072Z" level=info msg="StartContainer for \"c89424adb85b7eba1c1137416018e344ba25bbedfe4b5e9e7c7ee39f0c4083d1\" returns successfully" Jul 12 00:42:44.228766 env[1478]: time="2025-07-12T00:42:44.228712702Z" level=info msg="StartContainer for \"487c0842967ce4e15264791ba871ec086ab70d85e357c82dd3a1765567d210dd\" returns successfully" Jul 12 00:42:44.568221 kubelet[2079]: E0712 00:42:44.568182 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:44.579972 kubelet[2079]: E0712 00:42:44.579939 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:44.591202 kubelet[2079]: E0712 00:42:44.591171 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:44.666136 kubelet[2079]: I0712 00:42:44.666098 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:45.591139 kubelet[2079]: E0712 00:42:45.591101 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:45.591524 kubelet[2079]: E0712 00:42:45.591431 2079 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.486754 kubelet[2079]: E0712 00:42:46.486688 2079 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-2c4241d00d\" not found" 
node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.577857 kubelet[2079]: I0712 00:42:46.577804 2079 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.591149 kubelet[2079]: I0712 00:42:46.591120 2079 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.619602 kubelet[2079]: E0712 00:42:46.619555 2079 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.629142 kubelet[2079]: I0712 00:42:46.629103 2079 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.632196 kubelet[2079]: E0712 00:42:46.632163 2079 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.632305 kubelet[2079]: I0712 00:42:46.632203 2079 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.633974 kubelet[2079]: E0712 00:42:46.633951 2079 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.634069 kubelet[2079]: I0712 00:42:46.634055 2079 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:46.635804 kubelet[2079]: E0712 00:42:46.635778 2079 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-2c4241d00d\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:47.413204 kubelet[2079]: I0712 00:42:47.413172 2079 apiserver.go:52] "Watching apiserver" Jul 12 00:42:47.428198 kubelet[2079]: I0712 00:42:47.428155 2079 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:42:48.300259 kubelet[2079]: I0712 00:42:48.300222 2079 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:48.305749 kubelet[2079]: W0712 00:42:48.305710 2079 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:42:48.803762 systemd[1]: Reloading. Jul 12 00:42:48.883379 /usr/lib/systemd/system-generators/torcx-generator[2371]: time="2025-07-12T00:42:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:42:48.883410 /usr/lib/systemd/system-generators/torcx-generator[2371]: time="2025-07-12T00:42:48Z" level=info msg="torcx already run" Jul 12 00:42:48.971607 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:42:48.971783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:42:48.987528 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 12 00:42:49.095430 systemd[1]: Stopping kubelet.service... Jul 12 00:42:49.118073 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:42:49.118300 systemd[1]: Stopped kubelet.service. Jul 12 00:42:49.119946 systemd[1]: Starting kubelet.service... Jul 12 00:42:49.430155 systemd[1]: Started kubelet.service. Jul 12 00:42:49.490295 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:42:49.490295 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:42:49.490295 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:42:49.490295 kubelet[2435]: I0712 00:42:49.489646 2435 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:42:49.498991 kubelet[2435]: I0712 00:42:49.498945 2435 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:42:49.499177 kubelet[2435]: I0712 00:42:49.499164 2435 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:42:49.499702 kubelet[2435]: I0712 00:42:49.499682 2435 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:42:49.501258 kubelet[2435]: I0712 00:42:49.501239 2435 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 12 00:42:49.505327 kubelet[2435]: I0712 00:42:49.505295 2435 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:42:49.511216 kubelet[2435]: E0712 00:42:49.511176 2435 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:42:49.511216 kubelet[2435]: I0712 00:42:49.511213 2435 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:42:49.514457 kubelet[2435]: I0712 00:42:49.514430 2435 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:42:49.514718 kubelet[2435]: I0712 00:42:49.514689 2435 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:42:49.514896 kubelet[2435]: I0712 00:42:49.514717 2435 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.7-n-2c4241d00d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:42:49.514983 kubelet[2435]: I0712 00:42:49.514902 2435 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:42:49.514983 kubelet[2435]: I0712 00:42:49.514912 2435 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:42:49.514983 kubelet[2435]: I0712 00:42:49.514971 2435 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:42:49.515118 kubelet[2435]: I0712 00:42:49.515100 2435 
kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:42:49.515153 kubelet[2435]: I0712 00:42:49.515119 2435 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:42:49.515153 kubelet[2435]: I0712 00:42:49.515135 2435 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:42:49.521362 kubelet[2435]: I0712 00:42:49.521332 2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:42:49.522251 kubelet[2435]: I0712 00:42:49.522233 2435 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:42:49.522900 kubelet[2435]: I0712 00:42:49.522875 2435 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:42:49.523489 kubelet[2435]: I0712 00:42:49.523475 2435 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:42:49.523641 kubelet[2435]: I0712 00:42:49.523630 2435 server.go:1287] "Started kubelet" Jul 12 00:42:49.525776 kubelet[2435]: I0712 00:42:49.525743 2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:42:49.532196 kubelet[2435]: I0712 00:42:49.532149 2435 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:42:49.534034 kubelet[2435]: I0712 00:42:49.534017 2435 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:42:49.536558 kubelet[2435]: I0712 00:42:49.536514 2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:42:49.536810 kubelet[2435]: I0712 00:42:49.536796 2435 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:42:49.537054 kubelet[2435]: I0712 00:42:49.537038 2435 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:42:49.538215 kubelet[2435]: 
I0712 00:42:49.538200 2435 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:42:49.538513 kubelet[2435]: E0712 00:42:49.538495 2435 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-2c4241d00d\" not found" Jul 12 00:42:49.540119 kubelet[2435]: I0712 00:42:49.540103 2435 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:42:49.540334 kubelet[2435]: I0712 00:42:49.540321 2435 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:42:49.545825 kubelet[2435]: I0712 00:42:49.545796 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:42:49.546897 kubelet[2435]: I0712 00:42:49.546880 2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:42:49.547011 kubelet[2435]: I0712 00:42:49.547000 2435 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:42:49.547102 kubelet[2435]: I0712 00:42:49.547092 2435 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:42:49.547160 kubelet[2435]: I0712 00:42:49.547152 2435 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:42:49.547257 kubelet[2435]: E0712 00:42:49.547233 2435 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:42:49.565345 kubelet[2435]: I0712 00:42:49.565309 2435 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:42:49.569840 kubelet[2435]: I0712 00:42:49.569513 2435 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:42:49.569840 kubelet[2435]: I0712 00:42:49.569537 2435 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:42:49.576116 kubelet[2435]: E0712 00:42:49.576094 2435 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:42:49.629238 kubelet[2435]: I0712 00:42:49.629214 2435 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:42:49.629413 kubelet[2435]: I0712 00:42:49.629398 2435 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:42:49.629478 kubelet[2435]: I0712 00:42:49.629469 2435 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:42:49.629735 kubelet[2435]: I0712 00:42:49.629717 2435 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:42:49.629822 kubelet[2435]: I0712 00:42:49.629798 2435 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:42:49.629877 kubelet[2435]: I0712 00:42:49.629869 2435 policy_none.go:49] "None policy: Start" Jul 12 00:42:49.629930 kubelet[2435]: I0712 00:42:49.629922 2435 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:42:49.629988 kubelet[2435]: I0712 00:42:49.629980 2435 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:42:49.630156 kubelet[2435]: I0712 00:42:49.630144 2435 state_mem.go:75] "Updated machine memory state" Jul 12 00:42:49.633712 kubelet[2435]: I0712 00:42:49.633681 2435 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:42:49.633856 kubelet[2435]: I0712 00:42:49.633835 2435 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:42:49.633897 kubelet[2435]: I0712 00:42:49.633854 2435 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:42:49.634454 kubelet[2435]: I0712 00:42:49.634430 2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:42:49.637331 kubelet[2435]: E0712 00:42:49.637002 2435 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:42:49.648132 kubelet[2435]: I0712 00:42:49.648110 2435 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.648324 kubelet[2435]: I0712 00:42:49.648300 2435 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.648459 kubelet[2435]: I0712 00:42:49.648187 2435 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.662015 kubelet[2435]: W0712 00:42:49.661988 2435 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:42:49.663771 kubelet[2435]: W0712 00:42:49.663754 2435 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:42:49.663917 kubelet[2435]: E0712 00:42:49.663900 2435 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-2c4241d00d\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.664087 kubelet[2435]: W0712 00:42:49.664075 2435 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:42:49.736686 kubelet[2435]: I0712 00:42:49.736592 2435 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740686 kubelet[2435]: I0712 00:42:49.740653 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e69f0c661584f5000bd9335ca0a3cb1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-2c4241d00d\" (UID: 
\"7e69f0c661584f5000bd9335ca0a3cb1\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740686 kubelet[2435]: I0712 00:42:49.740687 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9089ba86c2b5671cd8831609dcc2ebba-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" (UID: \"9089ba86c2b5671cd8831609dcc2ebba\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740801 kubelet[2435]: I0712 00:42:49.740704 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9089ba86c2b5671cd8831609dcc2ebba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" (UID: \"9089ba86c2b5671cd8831609dcc2ebba\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740801 kubelet[2435]: I0712 00:42:49.740725 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740801 kubelet[2435]: I0712 00:42:49.740742 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740801 kubelet[2435]: I0712 00:42:49.740757 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9089ba86c2b5671cd8831609dcc2ebba-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-2c4241d00d\" (UID: \"9089ba86c2b5671cd8831609dcc2ebba\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740801 kubelet[2435]: I0712 00:42:49.740771 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740924 kubelet[2435]: I0712 00:42:49.740785 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.740924 kubelet[2435]: I0712 00:42:49.740800 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/565eb315c5f6a07bb337c89a5b0a9ab5-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-2c4241d00d\" (UID: \"565eb315c5f6a07bb337c89a5b0a9ab5\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.750798 kubelet[2435]: I0712 00:42:49.750775 2435 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.750974 kubelet[2435]: I0712 00:42:49.750963 2435 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-2c4241d00d" Jul 12 00:42:49.937695 sudo[2466]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin 
Jul 12 00:42:49.937916 sudo[2466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 12 00:42:50.414594 sudo[2466]: pam_unix(sudo:session): session closed for user root Jul 12 00:42:50.522221 kubelet[2435]: I0712 00:42:50.522184 2435 apiserver.go:52] "Watching apiserver" Jul 12 00:42:50.541127 kubelet[2435]: I0712 00:42:50.541089 2435 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:42:50.552422 kubelet[2435]: I0712 00:42:50.552362 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-2c4241d00d" podStartSLOduration=1.552330231 podStartE2EDuration="1.552330231s" podCreationTimestamp="2025-07-12 00:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:42:50.551032593 +0000 UTC m=+1.113217404" watchObservedRunningTime="2025-07-12 00:42:50.552330231 +0000 UTC m=+1.114515042" Jul 12 00:42:50.565802 kubelet[2435]: I0712 00:42:50.565752 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-2c4241d00d" podStartSLOduration=1.5657350239999999 podStartE2EDuration="1.565735024s" podCreationTimestamp="2025-07-12 00:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:42:50.565103424 +0000 UTC m=+1.127288275" watchObservedRunningTime="2025-07-12 00:42:50.565735024 +0000 UTC m=+1.127919835" Jul 12 00:42:50.593135 kubelet[2435]: I0712 00:42:50.593082 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-2c4241d00d" podStartSLOduration=2.593065218 podStartE2EDuration="2.593065218s" podCreationTimestamp="2025-07-12 00:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:42:50.579018865 +0000 UTC m=+1.141203676" watchObservedRunningTime="2025-07-12 00:42:50.593065218 +0000 UTC m=+1.155249989" Jul 12 00:42:52.435759 sudo[1785]: pam_unix(sudo:session): session closed for user root Jul 12 00:42:52.513656 sshd[1774]: pam_unix(sshd:session): session closed for user core Jul 12 00:42:52.516363 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:48100.service: Deactivated successfully. Jul 12 00:42:52.517096 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:42:52.517244 systemd[1]: session-7.scope: Consumed 7.571s CPU time. Jul 12 00:42:52.517688 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:42:52.518710 systemd-logind[1470]: Removed session 7. Jul 12 00:42:55.000154 kubelet[2435]: I0712 00:42:55.000126 2435 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:42:55.001004 env[1478]: time="2025-07-12T00:42:55.000971803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:42:55.001656 kubelet[2435]: I0712 00:42:55.001629 2435 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:42:55.941054 systemd[1]: Created slice kubepods-besteffort-podb54041d4_ae72_4ded_bf28_f7b39133499e.slice. Jul 12 00:42:55.955686 systemd[1]: Created slice kubepods-burstable-pod652fec4d_7a68_4a42_a474_fc9a7ab8cf73.slice. 
Jul 12 00:42:55.970089 kubelet[2435]: I0712 00:42:55.970038 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-clustermesh-secrets\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970089 kubelet[2435]: I0712 00:42:55.970082 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b54041d4-ae72-4ded-bf28-f7b39133499e-kube-proxy\") pod \"kube-proxy-mq42g\" (UID: \"b54041d4-ae72-4ded-bf28-f7b39133499e\") " pod="kube-system/kube-proxy-mq42g" Jul 12 00:42:55.970089 kubelet[2435]: I0712 00:42:55.970098 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b54041d4-ae72-4ded-bf28-f7b39133499e-lib-modules\") pod \"kube-proxy-mq42g\" (UID: \"b54041d4-ae72-4ded-bf28-f7b39133499e\") " pod="kube-system/kube-proxy-mq42g" Jul 12 00:42:55.970311 kubelet[2435]: I0712 00:42:55.970114 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lffjd\" (UniqueName: \"kubernetes.io/projected/b54041d4-ae72-4ded-bf28-f7b39133499e-kube-api-access-lffjd\") pod \"kube-proxy-mq42g\" (UID: \"b54041d4-ae72-4ded-bf28-f7b39133499e\") " pod="kube-system/kube-proxy-mq42g" Jul 12 00:42:55.970311 kubelet[2435]: I0712 00:42:55.970132 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cni-path\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970311 kubelet[2435]: I0712 00:42:55.970155 2435 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-etc-cni-netd\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970311 kubelet[2435]: I0712 00:42:55.970170 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-net\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970311 kubelet[2435]: I0712 00:42:55.970187 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-kernel\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970432 kubelet[2435]: I0712 00:42:55.970203 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-run\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970432 kubelet[2435]: I0712 00:42:55.970219 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-bpf-maps\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970432 kubelet[2435]: I0712 00:42:55.970234 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hostproc\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970432 kubelet[2435]: I0712 00:42:55.970247 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-config-path\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970432 kubelet[2435]: I0712 00:42:55.970263 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2cq\" (UniqueName: \"kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-kube-api-access-wm2cq\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970432 kubelet[2435]: I0712 00:42:55.970301 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-cgroup\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970571 kubelet[2435]: I0712 00:42:55.970316 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-xtables-lock\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970571 kubelet[2435]: I0712 00:42:55.970332 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hubble-tls\") pod \"cilium-g9949\" (UID: 
\"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:55.970571 kubelet[2435]: I0712 00:42:55.970347 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b54041d4-ae72-4ded-bf28-f7b39133499e-xtables-lock\") pod \"kube-proxy-mq42g\" (UID: \"b54041d4-ae72-4ded-bf28-f7b39133499e\") " pod="kube-system/kube-proxy-mq42g" Jul 12 00:42:55.970571 kubelet[2435]: I0712 00:42:55.970362 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-lib-modules\") pod \"cilium-g9949\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " pod="kube-system/cilium-g9949" Jul 12 00:42:56.071702 kubelet[2435]: I0712 00:42:56.071666 2435 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:42:56.109538 systemd[1]: Created slice kubepods-besteffort-pod1e23e555_b47e_4e91_b378_2e289e697af1.slice. 
Jul 12 00:42:56.171951 kubelet[2435]: I0712 00:42:56.171912 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e23e555-b47e-4e91-b378-2e289e697af1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2lsbf\" (UID: \"1e23e555-b47e-4e91-b378-2e289e697af1\") " pod="kube-system/cilium-operator-6c4d7847fc-2lsbf" Jul 12 00:42:56.172195 kubelet[2435]: I0712 00:42:56.172175 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799dn\" (UniqueName: \"kubernetes.io/projected/1e23e555-b47e-4e91-b378-2e289e697af1-kube-api-access-799dn\") pod \"cilium-operator-6c4d7847fc-2lsbf\" (UID: \"1e23e555-b47e-4e91-b378-2e289e697af1\") " pod="kube-system/cilium-operator-6c4d7847fc-2lsbf" Jul 12 00:42:56.245915 env[1478]: time="2025-07-12T00:42:56.245811006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mq42g,Uid:b54041d4-ae72-4ded-bf28-f7b39133499e,Namespace:kube-system,Attempt:0,}" Jul 12 00:42:56.258978 env[1478]: time="2025-07-12T00:42:56.258930169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9949,Uid:652fec4d-7a68-4a42-a474-fc9a7ab8cf73,Namespace:kube-system,Attempt:0,}" Jul 12 00:42:56.313864 env[1478]: time="2025-07-12T00:42:56.308833523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:42:56.313864 env[1478]: time="2025-07-12T00:42:56.308881680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:42:56.313864 env[1478]: time="2025-07-12T00:42:56.308893239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:42:56.313864 env[1478]: time="2025-07-12T00:42:56.309006753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d91b3f194c67aa0ba2ce84cb7e26e9a3d9ec7495e0e86ee518c5b1ac23e475a0 pid=2516 runtime=io.containerd.runc.v2 Jul 12 00:42:56.317665 env[1478]: time="2025-07-12T00:42:56.317592764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:42:56.317764 env[1478]: time="2025-07-12T00:42:56.317679159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:42:56.317764 env[1478]: time="2025-07-12T00:42:56.317706598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:42:56.318232 env[1478]: time="2025-07-12T00:42:56.318110576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c pid=2534 runtime=io.containerd.runc.v2 Jul 12 00:42:56.326024 systemd[1]: Started cri-containerd-d91b3f194c67aa0ba2ce84cb7e26e9a3d9ec7495e0e86ee518c5b1ac23e475a0.scope. Jul 12 00:42:56.339956 systemd[1]: Started cri-containerd-f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c.scope. 
Jul 12 00:42:56.361118 env[1478]: time="2025-07-12T00:42:56.360258753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mq42g,Uid:b54041d4-ae72-4ded-bf28-f7b39133499e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d91b3f194c67aa0ba2ce84cb7e26e9a3d9ec7495e0e86ee518c5b1ac23e475a0\"" Jul 12 00:42:56.363057 env[1478]: time="2025-07-12T00:42:56.363020082Z" level=info msg="CreateContainer within sandbox \"d91b3f194c67aa0ba2ce84cb7e26e9a3d9ec7495e0e86ee518c5b1ac23e475a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:42:56.379460 env[1478]: time="2025-07-12T00:42:56.379408867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9949,Uid:652fec4d-7a68-4a42-a474-fc9a7ab8cf73,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\"" Jul 12 00:42:56.380946 env[1478]: time="2025-07-12T00:42:56.380919905Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:42:56.418718 env[1478]: time="2025-07-12T00:42:56.418672442Z" level=info msg="CreateContainer within sandbox \"d91b3f194c67aa0ba2ce84cb7e26e9a3d9ec7495e0e86ee518c5b1ac23e475a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a9ca1e5449ed7ea492b844642dbbb1dd6ea1c41e4b858358866c67055249865\"" Jul 12 00:42:56.420706 env[1478]: time="2025-07-12T00:42:56.420355670Z" level=info msg="StartContainer for \"3a9ca1e5449ed7ea492b844642dbbb1dd6ea1c41e4b858358866c67055249865\"" Jul 12 00:42:56.422306 env[1478]: time="2025-07-12T00:42:56.422259926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2lsbf,Uid:1e23e555-b47e-4e91-b378-2e289e697af1,Namespace:kube-system,Attempt:0,}" Jul 12 00:42:56.436710 systemd[1]: Started cri-containerd-3a9ca1e5449ed7ea492b844642dbbb1dd6ea1c41e4b858358866c67055249865.scope. 
Jul 12 00:42:56.471317 env[1478]: time="2025-07-12T00:42:56.471246050Z" level=info msg="StartContainer for \"3a9ca1e5449ed7ea492b844642dbbb1dd6ea1c41e4b858358866c67055249865\" returns successfully" Jul 12 00:42:56.478731 env[1478]: time="2025-07-12T00:42:56.478638086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:42:56.478841 env[1478]: time="2025-07-12T00:42:56.478735521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:42:56.478841 env[1478]: time="2025-07-12T00:42:56.478775119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:42:56.478962 env[1478]: time="2025-07-12T00:42:56.478923950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97 pid=2629 runtime=io.containerd.runc.v2 Jul 12 00:42:56.491168 systemd[1]: Started cri-containerd-88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97.scope. 
Jul 12 00:42:56.523738 env[1478]: time="2025-07-12T00:42:56.523694105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2lsbf,Uid:1e23e555-b47e-4e91-b378-2e289e697af1,Namespace:kube-system,Attempt:0,} returns sandbox id \"88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97\"" Jul 12 00:42:56.847132 kubelet[2435]: I0712 00:42:56.846692 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mq42g" podStartSLOduration=1.84667394 podStartE2EDuration="1.84667394s" podCreationTimestamp="2025-07-12 00:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:42:56.631169073 +0000 UTC m=+7.193353884" watchObservedRunningTime="2025-07-12 00:42:56.84667394 +0000 UTC m=+7.408858751" Jul 12 00:43:01.477684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589392049.mount: Deactivated successfully. 
Jul 12 00:43:03.802917 env[1478]: time="2025-07-12T00:43:03.802845348Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:43:03.824037 env[1478]: time="2025-07-12T00:43:03.823996164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:43:03.834468 env[1478]: time="2025-07-12T00:43:03.834419639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:43:03.834902 env[1478]: time="2025-07-12T00:43:03.834873138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:43:03.837860 env[1478]: time="2025-07-12T00:43:03.837824760Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:43:03.838930 env[1478]: time="2025-07-12T00:43:03.838624123Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:43:03.879970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271332829.mount: Deactivated successfully. Jul 12 00:43:03.886404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323239037.mount: Deactivated successfully. 
Jul 12 00:43:03.902870 env[1478]: time="2025-07-12T00:43:03.902823495Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\"" Jul 12 00:43:03.904487 env[1478]: time="2025-07-12T00:43:03.904455979Z" level=info msg="StartContainer for \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\"" Jul 12 00:43:03.922645 systemd[1]: Started cri-containerd-b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e.scope. Jul 12 00:43:03.954614 env[1478]: time="2025-07-12T00:43:03.954570127Z" level=info msg="StartContainer for \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\" returns successfully" Jul 12 00:43:03.957088 systemd[1]: cri-containerd-b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e.scope: Deactivated successfully. Jul 12 00:43:04.877519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e-rootfs.mount: Deactivated successfully. 
Jul 12 00:43:05.528903 env[1478]: time="2025-07-12T00:43:05.528838734Z" level=info msg="shim disconnected" id=b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e Jul 12 00:43:05.528903 env[1478]: time="2025-07-12T00:43:05.528900452Z" level=warning msg="cleaning up after shim disconnected" id=b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e namespace=k8s.io Jul 12 00:43:05.528903 env[1478]: time="2025-07-12T00:43:05.528910411Z" level=info msg="cleaning up dead shim" Jul 12 00:43:05.542179 env[1478]: time="2025-07-12T00:43:05.542116903Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2841 runtime=io.containerd.runc.v2\n" Jul 12 00:43:05.639406 env[1478]: time="2025-07-12T00:43:05.639360492Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:43:05.685212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659710127.mount: Deactivated successfully. Jul 12 00:43:05.693155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112245556.mount: Deactivated successfully. Jul 12 00:43:05.708965 env[1478]: time="2025-07-12T00:43:05.708907594Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\"" Jul 12 00:43:05.709608 env[1478]: time="2025-07-12T00:43:05.709575085Z" level=info msg="StartContainer for \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\"" Jul 12 00:43:05.728035 systemd[1]: Started cri-containerd-7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794.scope. 
Jul 12 00:43:05.755172 env[1478]: time="2025-07-12T00:43:05.755120536Z" level=info msg="StartContainer for \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\" returns successfully" Jul 12 00:43:05.763449 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:43:05.763648 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:43:05.764210 systemd[1]: Stopping systemd-sysctl.service... Jul 12 00:43:05.766873 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:43:05.771758 systemd[1]: cri-containerd-7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794.scope: Deactivated successfully. Jul 12 00:43:05.776242 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:43:05.809423 env[1478]: time="2025-07-12T00:43:05.809305883Z" level=info msg="shim disconnected" id=7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794 Jul 12 00:43:05.809423 env[1478]: time="2025-07-12T00:43:05.809353161Z" level=warning msg="cleaning up after shim disconnected" id=7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794 namespace=k8s.io Jul 12 00:43:05.809423 env[1478]: time="2025-07-12T00:43:05.809363560Z" level=info msg="cleaning up dead shim" Jul 12 00:43:05.818706 env[1478]: time="2025-07-12T00:43:05.818650267Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2903 runtime=io.containerd.runc.v2\n" Jul 12 00:43:06.643027 env[1478]: time="2025-07-12T00:43:06.642959445Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:43:06.697984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount554601711.mount: Deactivated successfully. Jul 12 00:43:06.705917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461646932.mount: Deactivated successfully. 
Jul 12 00:43:06.789253 env[1478]: time="2025-07-12T00:43:06.789185552Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\"" Jul 12 00:43:06.790266 env[1478]: time="2025-07-12T00:43:06.790225106Z" level=info msg="StartContainer for \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\"" Jul 12 00:43:06.808378 systemd[1]: Started cri-containerd-70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57.scope. Jul 12 00:43:06.854235 systemd[1]: cri-containerd-70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57.scope: Deactivated successfully. Jul 12 00:43:06.859969 env[1478]: time="2025-07-12T00:43:06.859857631Z" level=info msg="StartContainer for \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\" returns successfully" Jul 12 00:43:06.887338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57-rootfs.mount: Deactivated successfully. 
Jul 12 00:43:06.922654 env[1478]: time="2025-07-12T00:43:06.922538180Z" level=info msg="shim disconnected" id=70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57 Jul 12 00:43:06.922654 env[1478]: time="2025-07-12T00:43:06.922590777Z" level=warning msg="cleaning up after shim disconnected" id=70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57 namespace=k8s.io Jul 12 00:43:06.922654 env[1478]: time="2025-07-12T00:43:06.922601377Z" level=info msg="cleaning up dead shim" Jul 12 00:43:06.931119 env[1478]: time="2025-07-12T00:43:06.931075728Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:43:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964 runtime=io.containerd.runc.v2\n" Jul 12 00:43:07.446953 env[1478]: time="2025-07-12T00:43:07.446906297Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:43:07.455140 env[1478]: time="2025-07-12T00:43:07.455100628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:43:07.462458 env[1478]: time="2025-07-12T00:43:07.462418635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:43:07.462990 env[1478]: time="2025-07-12T00:43:07.462958812Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:43:07.467092 env[1478]: 
time="2025-07-12T00:43:07.467049798Z" level=info msg="CreateContainer within sandbox \"88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:43:07.501710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127735679.mount: Deactivated successfully. Jul 12 00:43:07.524391 env[1478]: time="2025-07-12T00:43:07.524319675Z" level=info msg="CreateContainer within sandbox \"88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\"" Jul 12 00:43:07.525163 env[1478]: time="2025-07-12T00:43:07.525136120Z" level=info msg="StartContainer for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\"" Jul 12 00:43:07.549584 systemd[1]: Started cri-containerd-ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2.scope. Jul 12 00:43:07.584599 env[1478]: time="2025-07-12T00:43:07.584547385Z" level=info msg="StartContainer for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" returns successfully" Jul 12 00:43:07.648248 env[1478]: time="2025-07-12T00:43:07.648192910Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:43:07.691484 kubelet[2435]: I0712 00:43:07.691408 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2lsbf" podStartSLOduration=0.753106215 podStartE2EDuration="11.691386828s" podCreationTimestamp="2025-07-12 00:42:56 +0000 UTC" firstStartedPulling="2025-07-12 00:42:56.525647958 +0000 UTC m=+7.087832769" lastFinishedPulling="2025-07-12 00:43:07.463928611 +0000 UTC m=+18.026113382" observedRunningTime="2025-07-12 00:43:07.669044101 +0000 UTC m=+18.231228872" 
watchObservedRunningTime="2025-07-12 00:43:07.691386828 +0000 UTC m=+18.253571719" Jul 12 00:43:07.701011 env[1478]: time="2025-07-12T00:43:07.700893502Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\"" Jul 12 00:43:07.702343 env[1478]: time="2025-07-12T00:43:07.702304002Z" level=info msg="StartContainer for \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\"" Jul 12 00:43:07.724887 systemd[1]: Started cri-containerd-16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b.scope. Jul 12 00:43:07.822533 systemd[1]: cri-containerd-16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b.scope: Deactivated successfully. Jul 12 00:43:07.829466 env[1478]: time="2025-07-12T00:43:07.829416459Z" level=info msg="StartContainer for \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\" returns successfully" Jul 12 00:43:07.878047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163739242.mount: Deactivated successfully. 
Jul 12 00:43:08.159964 env[1478]: time="2025-07-12T00:43:08.159915542Z" level=info msg="shim disconnected" id=16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b Jul 12 00:43:08.160242 env[1478]: time="2025-07-12T00:43:08.160223009Z" level=warning msg="cleaning up after shim disconnected" id=16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b namespace=k8s.io Jul 12 00:43:08.160367 env[1478]: time="2025-07-12T00:43:08.160351204Z" level=info msg="cleaning up dead shim" Jul 12 00:43:08.171804 env[1478]: time="2025-07-12T00:43:08.171748608Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:43:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3059 runtime=io.containerd.runc.v2\n" Jul 12 00:43:08.653901 env[1478]: time="2025-07-12T00:43:08.653205260Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:43:08.693571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288368570.mount: Deactivated successfully. Jul 12 00:43:08.703225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021410037.mount: Deactivated successfully. Jul 12 00:43:08.718998 env[1478]: time="2025-07-12T00:43:08.718911276Z" level=info msg="CreateContainer within sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\"" Jul 12 00:43:08.721638 env[1478]: time="2025-07-12T00:43:08.719703683Z" level=info msg="StartContainer for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\"" Jul 12 00:43:08.739681 systemd[1]: Started cri-containerd-58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d.scope. 
Jul 12 00:43:08.772153 env[1478]: time="2025-07-12T00:43:08.772098574Z" level=info msg="StartContainer for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" returns successfully" Jul 12 00:43:08.871309 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:43:08.907598 kubelet[2435]: I0712 00:43:08.906651 2435 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:43:08.952309 systemd[1]: Created slice kubepods-burstable-podf2147cb5_175d_4bd9_950e_d30c06cce5da.slice. Jul 12 00:43:08.960352 systemd[1]: Created slice kubepods-burstable-podde677763_5ee6_4bf1_8580_dbf5ee8ce3ef.slice. Jul 12 00:43:09.050249 kubelet[2435]: I0712 00:43:09.050208 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2147cb5-175d-4bd9-950e-d30c06cce5da-config-volume\") pod \"coredns-668d6bf9bc-btg4g\" (UID: \"f2147cb5-175d-4bd9-950e-d30c06cce5da\") " pod="kube-system/coredns-668d6bf9bc-btg4g" Jul 12 00:43:09.050530 kubelet[2435]: I0712 00:43:09.050508 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td24n\" (UniqueName: \"kubernetes.io/projected/f2147cb5-175d-4bd9-950e-d30c06cce5da-kube-api-access-td24n\") pod \"coredns-668d6bf9bc-btg4g\" (UID: \"f2147cb5-175d-4bd9-950e-d30c06cce5da\") " pod="kube-system/coredns-668d6bf9bc-btg4g" Jul 12 00:43:09.151617 kubelet[2435]: I0712 00:43:09.151566 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de677763-5ee6-4bf1-8580-dbf5ee8ce3ef-config-volume\") pod \"coredns-668d6bf9bc-w22ml\" (UID: \"de677763-5ee6-4bf1-8580-dbf5ee8ce3ef\") " pod="kube-system/coredns-668d6bf9bc-w22ml" Jul 12 00:43:09.151781 kubelet[2435]: I0712 00:43:09.151650 2435 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrv2c\" (UniqueName: \"kubernetes.io/projected/de677763-5ee6-4bf1-8580-dbf5ee8ce3ef-kube-api-access-hrv2c\") pod \"coredns-668d6bf9bc-w22ml\" (UID: \"de677763-5ee6-4bf1-8580-dbf5ee8ce3ef\") " pod="kube-system/coredns-668d6bf9bc-w22ml" Jul 12 00:43:09.256000 env[1478]: time="2025-07-12T00:43:09.255495326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btg4g,Uid:f2147cb5-175d-4bd9-950e-d30c06cce5da,Namespace:kube-system,Attempt:0,}" Jul 12 00:43:09.563820 env[1478]: time="2025-07-12T00:43:09.563771118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w22ml,Uid:de677763-5ee6-4bf1-8580-dbf5ee8ce3ef,Namespace:kube-system,Attempt:0,}" Jul 12 00:43:09.610309 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:43:11.318405 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 12 00:43:11.318556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 12 00:43:11.313140 systemd-networkd[1640]: cilium_host: Link UP Jul 12 00:43:11.326478 systemd-networkd[1640]: cilium_net: Link UP Jul 12 00:43:11.327136 systemd-networkd[1640]: cilium_net: Gained carrier Jul 12 00:43:11.328294 systemd-networkd[1640]: cilium_host: Gained carrier Jul 12 00:43:11.329101 systemd-networkd[1640]: cilium_net: Gained IPv6LL Jul 12 00:43:11.375430 systemd-networkd[1640]: cilium_host: Gained IPv6LL Jul 12 00:43:11.571500 systemd-networkd[1640]: cilium_vxlan: Link UP Jul 12 00:43:11.571507 systemd-networkd[1640]: cilium_vxlan: Gained carrier Jul 12 00:43:11.862315 kernel: NET: Registered PF_ALG protocol family Jul 12 00:43:12.818367 systemd-networkd[1640]: lxc_health: Link UP Jul 12 00:43:12.836204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:43:12.836574 systemd-networkd[1640]: lxc_health: Gained carrier Jul 12 00:43:13.064437 
systemd-networkd[1640]: cilium_vxlan: Gained IPv6LL Jul 12 00:43:13.155062 systemd-networkd[1640]: lxca5f298cf10b5: Link UP Jul 12 00:43:13.167413 kernel: eth0: renamed from tmp4c077 Jul 12 00:43:13.185329 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca5f298cf10b5: link becomes ready Jul 12 00:43:13.187089 systemd-networkd[1640]: lxca5f298cf10b5: Gained carrier Jul 12 00:43:13.336012 systemd-networkd[1640]: lxcdce01ae86403: Link UP Jul 12 00:43:13.344304 kernel: eth0: renamed from tmp9d9db Jul 12 00:43:13.357300 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdce01ae86403: link becomes ready Jul 12 00:43:13.357321 systemd-networkd[1640]: lxcdce01ae86403: Gained carrier Jul 12 00:43:14.286916 kubelet[2435]: I0712 00:43:14.286846 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g9949" podStartSLOduration=11.831051556 podStartE2EDuration="19.28682326s" podCreationTimestamp="2025-07-12 00:42:55 +0000 UTC" firstStartedPulling="2025-07-12 00:42:56.38046313 +0000 UTC m=+6.942647901" lastFinishedPulling="2025-07-12 00:43:03.836234834 +0000 UTC m=+14.398419605" observedRunningTime="2025-07-12 00:43:09.694024591 +0000 UTC m=+20.256209482" watchObservedRunningTime="2025-07-12 00:43:14.28682326 +0000 UTC m=+24.849008031" Jul 12 00:43:14.472473 systemd-networkd[1640]: lxc_health: Gained IPv6LL Jul 12 00:43:14.536398 systemd-networkd[1640]: lxca5f298cf10b5: Gained IPv6LL Jul 12 00:43:14.665740 systemd-networkd[1640]: lxcdce01ae86403: Gained IPv6LL Jul 12 00:43:17.030361 env[1478]: time="2025-07-12T00:43:17.030133276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:43:17.030361 env[1478]: time="2025-07-12T00:43:17.030178434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:43:17.030361 env[1478]: time="2025-07-12T00:43:17.030188314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:43:17.030755 env[1478]: time="2025-07-12T00:43:17.030497423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d9db521ab27b806bcf3fcba7b7791792edc19e73a0d331983c22d228b0528e7 pid=3620 runtime=io.containerd.runc.v2 Jul 12 00:43:17.047672 env[1478]: time="2025-07-12T00:43:17.047551029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:43:17.047834 env[1478]: time="2025-07-12T00:43:17.047683264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:43:17.047834 env[1478]: time="2025-07-12T00:43:17.047711023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:43:17.047944 env[1478]: time="2025-07-12T00:43:17.047889617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c0770c7ec814c1ca2f1508e093755febbaa68765d5a7c3a4dc92276349483cd pid=3638 runtime=io.containerd.runc.v2 Jul 12 00:43:17.061132 systemd[1]: run-containerd-runc-k8s.io-9d9db521ab27b806bcf3fcba7b7791792edc19e73a0d331983c22d228b0528e7-runc.1x4tAL.mount: Deactivated successfully. Jul 12 00:43:17.072890 systemd[1]: Started cri-containerd-9d9db521ab27b806bcf3fcba7b7791792edc19e73a0d331983c22d228b0528e7.scope. Jul 12 00:43:17.087330 systemd[1]: Started cri-containerd-4c0770c7ec814c1ca2f1508e093755febbaa68765d5a7c3a4dc92276349483cd.scope. 
Jul 12 00:43:17.126982 env[1478]: time="2025-07-12T00:43:17.126928421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w22ml,Uid:de677763-5ee6-4bf1-8580-dbf5ee8ce3ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c0770c7ec814c1ca2f1508e093755febbaa68765d5a7c3a4dc92276349483cd\"" Jul 12 00:43:17.130186 env[1478]: time="2025-07-12T00:43:17.130126109Z" level=info msg="CreateContainer within sandbox \"4c0770c7ec814c1ca2f1508e093755febbaa68765d5a7c3a4dc92276349483cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:43:17.153795 env[1478]: time="2025-07-12T00:43:17.153735046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-btg4g,Uid:f2147cb5-175d-4bd9-950e-d30c06cce5da,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d9db521ab27b806bcf3fcba7b7791792edc19e73a0d331983c22d228b0528e7\"" Jul 12 00:43:17.157034 env[1478]: time="2025-07-12T00:43:17.156972853Z" level=info msg="CreateContainer within sandbox \"9d9db521ab27b806bcf3fcba7b7791792edc19e73a0d331983c22d228b0528e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:43:17.208579 env[1478]: time="2025-07-12T00:43:17.208515976Z" level=info msg="CreateContainer within sandbox \"4c0770c7ec814c1ca2f1508e093755febbaa68765d5a7c3a4dc92276349483cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5f68d074d008e6b64637fb138b9cb1e7f852c11036f4883118b9a128a008596\"" Jul 12 00:43:17.210345 env[1478]: time="2025-07-12T00:43:17.210298474Z" level=info msg="StartContainer for \"a5f68d074d008e6b64637fb138b9cb1e7f852c11036f4883118b9a128a008596\"" Jul 12 00:43:17.231318 systemd[1]: Started cri-containerd-a5f68d074d008e6b64637fb138b9cb1e7f852c11036f4883118b9a128a008596.scope. 
Jul 12 00:43:17.240485 env[1478]: time="2025-07-12T00:43:17.240419783Z" level=info msg="CreateContainer within sandbox \"9d9db521ab27b806bcf3fcba7b7791792edc19e73a0d331983c22d228b0528e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e811d1978b0514678285b339f10b081288e65c7f7bf2a162af2a05894df3c14\"" Jul 12 00:43:17.241211 env[1478]: time="2025-07-12T00:43:17.241168997Z" level=info msg="StartContainer for \"5e811d1978b0514678285b339f10b081288e65c7f7bf2a162af2a05894df3c14\"" Jul 12 00:43:17.271874 systemd[1]: Started cri-containerd-5e811d1978b0514678285b339f10b081288e65c7f7bf2a162af2a05894df3c14.scope. Jul 12 00:43:17.286690 env[1478]: time="2025-07-12T00:43:17.285810601Z" level=info msg="StartContainer for \"a5f68d074d008e6b64637fb138b9cb1e7f852c11036f4883118b9a128a008596\" returns successfully" Jul 12 00:43:17.331954 env[1478]: time="2025-07-12T00:43:17.331841356Z" level=info msg="StartContainer for \"5e811d1978b0514678285b339f10b081288e65c7f7bf2a162af2a05894df3c14\" returns successfully" Jul 12 00:43:17.714697 kubelet[2435]: I0712 00:43:17.714558 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w22ml" podStartSLOduration=21.714536931 podStartE2EDuration="21.714536931s" podCreationTimestamp="2025-07-12 00:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:43:17.690599766 +0000 UTC m=+28.252784577" watchObservedRunningTime="2025-07-12 00:43:17.714536931 +0000 UTC m=+28.276721742" Jul 12 00:43:17.754895 kubelet[2435]: I0712 00:43:17.754825 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-btg4g" podStartSLOduration=21.754807327 podStartE2EDuration="21.754807327s" podCreationTimestamp="2025-07-12 00:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-12 00:43:17.716345748 +0000 UTC m=+28.278530519" watchObservedRunningTime="2025-07-12 00:43:17.754807327 +0000 UTC m=+28.316992098" Jul 12 00:44:54.917449 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:40780.service. Jul 12 00:44:55.404925 sshd[3800]: Accepted publickey for core from 10.200.16.10 port 40780 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:44:55.406668 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:44:55.411073 systemd[1]: Started session-8.scope. Jul 12 00:44:55.412412 systemd-logind[1470]: New session 8 of user core. Jul 12 00:44:55.908640 sshd[3800]: pam_unix(sshd:session): session closed for user core Jul 12 00:44:55.911329 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:40780.service: Deactivated successfully. Jul 12 00:44:55.912052 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:44:55.912827 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:44:55.913609 systemd-logind[1470]: Removed session 8. Jul 12 00:45:00.992638 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:51058.service. Jul 12 00:45:01.481698 sshd[3814]: Accepted publickey for core from 10.200.16.10 port 51058 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:01.483455 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:01.488012 systemd[1]: Started session-9.scope. Jul 12 00:45:01.488673 systemd-logind[1470]: New session 9 of user core. Jul 12 00:45:01.903980 sshd[3814]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:01.906925 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:45:01.907111 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:51058.service: Deactivated successfully. Jul 12 00:45:01.907847 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 12 00:45:01.908759 systemd-logind[1470]: Removed session 9. Jul 12 00:45:06.984530 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:51072.service. Jul 12 00:45:07.457994 sshd[3826]: Accepted publickey for core from 10.200.16.10 port 51072 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:07.459685 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:07.463800 systemd-logind[1470]: New session 10 of user core. Jul 12 00:45:07.464330 systemd[1]: Started session-10.scope. Jul 12 00:45:07.869511 sshd[3826]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:07.872433 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:45:07.872790 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:45:07.873872 systemd-logind[1470]: Removed session 10. Jul 12 00:45:07.874313 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:51072.service: Deactivated successfully. Jul 12 00:45:12.956654 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:51146.service. Jul 12 00:45:13.445157 sshd[3839]: Accepted publickey for core from 10.200.16.10 port 51146 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:13.446910 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:13.450844 systemd-logind[1470]: New session 11 of user core. Jul 12 00:45:13.451380 systemd[1]: Started session-11.scope. Jul 12 00:45:13.861518 sshd[3839]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:13.864766 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:45:13.865979 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:45:13.866569 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:51146.service: Deactivated successfully. Jul 12 00:45:13.867638 systemd-logind[1470]: Removed session 11. 
Jul 12 00:45:13.940628 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:51162.service. Jul 12 00:45:14.415446 sshd[3852]: Accepted publickey for core from 10.200.16.10 port 51162 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:14.417466 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:14.422943 systemd[1]: Started session-12.scope. Jul 12 00:45:14.423923 systemd-logind[1470]: New session 12 of user core. Jul 12 00:45:14.886169 sshd[3852]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:14.889339 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:51162.service: Deactivated successfully. Jul 12 00:45:14.890086 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:45:14.891120 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:45:14.892342 systemd-logind[1470]: Removed session 12. Jul 12 00:45:14.967100 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:51174.service. Jul 12 00:45:15.440713 sshd[3861]: Accepted publickey for core from 10.200.16.10 port 51174 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:15.442402 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:15.446354 systemd-logind[1470]: New session 13 of user core. Jul 12 00:45:15.446870 systemd[1]: Started session-13.scope. Jul 12 00:45:15.850658 sshd[3861]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:15.853267 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:45:15.853471 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:51174.service: Deactivated successfully. Jul 12 00:45:15.854192 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:45:15.854937 systemd-logind[1470]: Removed session 13. Jul 12 00:45:20.936939 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:44776.service. 
Jul 12 00:45:21.411033 sshd[3874]: Accepted publickey for core from 10.200.16.10 port 44776 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:21.412729 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:21.417128 systemd[1]: Started session-14.scope. Jul 12 00:45:21.418197 systemd-logind[1470]: New session 14 of user core. Jul 12 00:45:21.820304 sshd[3874]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:21.822822 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:44776.service: Deactivated successfully. Jul 12 00:45:21.823601 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:45:21.824188 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:45:21.824998 systemd-logind[1470]: Removed session 14. Jul 12 00:45:26.890665 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:44784.service. Jul 12 00:45:27.324255 sshd[3888]: Accepted publickey for core from 10.200.16.10 port 44784 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:27.326528 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:27.331536 systemd[1]: Started session-15.scope. Jul 12 00:45:27.333024 systemd-logind[1470]: New session 15 of user core. Jul 12 00:45:27.712220 sshd[3888]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:27.715070 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:44784.service: Deactivated successfully. Jul 12 00:45:27.715915 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:45:27.716993 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:45:27.717762 systemd-logind[1470]: Removed session 15. Jul 12 00:45:27.798306 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:44794.service. 
Jul 12 00:45:28.285670 sshd[3900]: Accepted publickey for core from 10.200.16.10 port 44794 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:28.287315 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:28.292861 systemd[1]: Started session-16.scope. Jul 12 00:45:28.294375 systemd-logind[1470]: New session 16 of user core. Jul 12 00:45:28.832302 sshd[3900]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:28.834971 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:44794.service: Deactivated successfully. Jul 12 00:45:28.835722 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:45:28.836305 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:45:28.837095 systemd-logind[1470]: Removed session 16. Jul 12 00:45:28.908372 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:44810.service. Jul 12 00:45:29.357333 sshd[3910]: Accepted publickey for core from 10.200.16.10 port 44810 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:29.358969 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:29.363541 systemd[1]: Started session-17.scope. Jul 12 00:45:29.364597 systemd-logind[1470]: New session 17 of user core. Jul 12 00:45:30.331705 sshd[3910]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:30.334208 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:44810.service: Deactivated successfully. Jul 12 00:45:30.334920 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:45:30.335234 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:45:30.336063 systemd-logind[1470]: Removed session 17. Jul 12 00:45:30.412472 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:47058.service. 
Jul 12 00:45:30.898875 sshd[3928]: Accepted publickey for core from 10.200.16.10 port 47058 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:30.900264 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:30.904513 systemd-logind[1470]: New session 18 of user core. Jul 12 00:45:30.904992 systemd[1]: Started session-18.scope. Jul 12 00:45:31.423905 sshd[3928]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:31.426851 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:47058.service: Deactivated successfully. Jul 12 00:45:31.427565 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:45:31.427971 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:45:31.428686 systemd-logind[1470]: Removed session 18. Jul 12 00:45:31.505745 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:47066.service. Jul 12 00:45:31.992941 sshd[3938]: Accepted publickey for core from 10.200.16.10 port 47066 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:31.994598 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:31.998896 systemd[1]: Started session-19.scope. Jul 12 00:45:31.999452 systemd-logind[1470]: New session 19 of user core. Jul 12 00:45:32.409738 sshd[3938]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:32.413865 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:45:32.414051 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:47066.service: Deactivated successfully. Jul 12 00:45:32.414777 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:45:32.415701 systemd-logind[1470]: Removed session 19. Jul 12 00:45:37.490060 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:47072.service. 
Jul 12 00:45:37.963661 sshd[3952]: Accepted publickey for core from 10.200.16.10 port 47072 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:37.965807 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:37.971489 systemd[1]: Started session-20.scope. Jul 12 00:45:37.971817 systemd-logind[1470]: New session 20 of user core. Jul 12 00:45:38.371809 sshd[3952]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:38.375455 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:47072.service: Deactivated successfully. Jul 12 00:45:38.376471 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:45:38.377876 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:45:38.379078 systemd-logind[1470]: Removed session 20. Jul 12 00:45:43.455324 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:58008.service. Jul 12 00:45:43.942113 sshd[3964]: Accepted publickey for core from 10.200.16.10 port 58008 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:43.943722 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:43.948175 systemd[1]: Started session-21.scope. Jul 12 00:45:43.948510 systemd-logind[1470]: New session 21 of user core. Jul 12 00:45:44.352913 sshd[3964]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:44.355959 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:58008.service: Deactivated successfully. Jul 12 00:45:44.356729 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:45:44.357258 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:45:44.358025 systemd-logind[1470]: Removed session 21. Jul 12 00:45:49.429413 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:58018.service. 
Jul 12 00:45:49.877201 sshd[3976]: Accepted publickey for core from 10.200.16.10 port 58018 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:49.879083 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:49.883971 systemd[1]: Started session-22.scope. Jul 12 00:45:49.884512 systemd-logind[1470]: New session 22 of user core. Jul 12 00:45:50.264896 sshd[3976]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:50.267493 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:58018.service: Deactivated successfully. Jul 12 00:45:50.268178 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:45:50.268711 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:45:50.269385 systemd-logind[1470]: Removed session 22. Jul 12 00:45:50.347463 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:54252.service. Jul 12 00:45:50.835453 sshd[3990]: Accepted publickey for core from 10.200.16.10 port 54252 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:50.837076 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:50.840918 systemd-logind[1470]: New session 23 of user core. Jul 12 00:45:50.841373 systemd[1]: Started session-23.scope. 
Jul 12 00:45:53.632453 env[1478]: time="2025-07-12T00:45:53.632410878Z" level=info msg="StopContainer for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" with timeout 30 (s)" Jul 12 00:45:53.633250 env[1478]: time="2025-07-12T00:45:53.633220048Z" level=info msg="Stop container \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" with signal terminated" Jul 12 00:45:53.651633 env[1478]: time="2025-07-12T00:45:53.651557694Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:45:53.652569 systemd[1]: cri-containerd-ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2.scope: Deactivated successfully. Jul 12 00:45:53.665113 env[1478]: time="2025-07-12T00:45:53.665053875Z" level=info msg="StopContainer for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" with timeout 2 (s)" Jul 12 00:45:53.665393 env[1478]: time="2025-07-12T00:45:53.665361879Z" level=info msg="Stop container \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" with signal terminated" Jul 12 00:45:53.674520 systemd-networkd[1640]: lxc_health: Link DOWN Jul 12 00:45:53.674528 systemd-networkd[1640]: lxc_health: Lost carrier Jul 12 00:45:53.678516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2-rootfs.mount: Deactivated successfully. Jul 12 00:45:53.703211 systemd[1]: cri-containerd-58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d.scope: Deactivated successfully. Jul 12 00:45:53.703568 systemd[1]: cri-containerd-58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d.scope: Consumed 6.643s CPU time. 
Jul 12 00:45:53.721511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d-rootfs.mount: Deactivated successfully. Jul 12 00:45:53.760048 env[1478]: time="2025-07-12T00:45:53.759991386Z" level=info msg="shim disconnected" id=58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d Jul 12 00:45:53.760048 env[1478]: time="2025-07-12T00:45:53.760043147Z" level=warning msg="cleaning up after shim disconnected" id=58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d namespace=k8s.io Jul 12 00:45:53.760048 env[1478]: time="2025-07-12T00:45:53.760053907Z" level=info msg="cleaning up dead shim" Jul 12 00:45:53.760411 env[1478]: time="2025-07-12T00:45:53.760240470Z" level=info msg="shim disconnected" id=ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2 Jul 12 00:45:53.760411 env[1478]: time="2025-07-12T00:45:53.760297750Z" level=warning msg="cleaning up after shim disconnected" id=ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2 namespace=k8s.io Jul 12 00:45:53.760411 env[1478]: time="2025-07-12T00:45:53.760306790Z" level=info msg="cleaning up dead shim" Jul 12 00:45:53.768639 env[1478]: time="2025-07-12T00:45:53.768591301Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n" Jul 12 00:45:53.770301 env[1478]: time="2025-07-12T00:45:53.770230883Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4059 runtime=io.containerd.runc.v2\n" Jul 12 00:45:53.777573 env[1478]: time="2025-07-12T00:45:53.777524861Z" level=info msg="StopContainer for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" returns successfully" Jul 12 00:45:53.778365 env[1478]: time="2025-07-12T00:45:53.778335912Z" level=info msg="StopPodSandbox for 
\"88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97\"" Jul 12 00:45:53.778611 env[1478]: time="2025-07-12T00:45:53.778588435Z" level=info msg="Container to stop \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:53.780495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97-shm.mount: Deactivated successfully. Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.780499461Z" level=info msg="StopContainer for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" returns successfully" Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.780980867Z" level=info msg="StopPodSandbox for \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\"" Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.781048988Z" level=info msg="Container to stop \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.781063588Z" level=info msg="Container to stop \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.781093629Z" level=info msg="Container to stop \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.781107229Z" level=info msg="Container to stop \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:53.781889 env[1478]: time="2025-07-12T00:45:53.781118349Z" level=info msg="Container to stop 
\"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:53.787209 systemd[1]: cri-containerd-f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c.scope: Deactivated successfully. Jul 12 00:45:53.790399 systemd[1]: cri-containerd-88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97.scope: Deactivated successfully. Jul 12 00:45:53.836935 env[1478]: time="2025-07-12T00:45:53.836874776Z" level=info msg="shim disconnected" id=88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97 Jul 12 00:45:53.836935 env[1478]: time="2025-07-12T00:45:53.836934657Z" level=warning msg="cleaning up after shim disconnected" id=88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97 namespace=k8s.io Jul 12 00:45:53.837164 env[1478]: time="2025-07-12T00:45:53.836947177Z" level=info msg="cleaning up dead shim" Jul 12 00:45:53.837164 env[1478]: time="2025-07-12T00:45:53.837110259Z" level=info msg="shim disconnected" id=f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c Jul 12 00:45:53.837164 env[1478]: time="2025-07-12T00:45:53.837136139Z" level=warning msg="cleaning up after shim disconnected" id=f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c namespace=k8s.io Jul 12 00:45:53.837164 env[1478]: time="2025-07-12T00:45:53.837156100Z" level=info msg="cleaning up dead shim" Jul 12 00:45:53.845741 env[1478]: time="2025-07-12T00:45:53.845684414Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n" Jul 12 00:45:53.846055 env[1478]: time="2025-07-12T00:45:53.846017818Z" level=info msg="TearDown network for sandbox \"88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97\" successfully" Jul 12 00:45:53.846055 env[1478]: time="2025-07-12T00:45:53.846048939Z" level=info msg="StopPodSandbox for 
\"88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97\" returns successfully" Jul 12 00:45:53.846213 env[1478]: time="2025-07-12T00:45:53.846193301Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n" Jul 12 00:45:53.846975 env[1478]: time="2025-07-12T00:45:53.846842349Z" level=info msg="TearDown network for sandbox \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" successfully" Jul 12 00:45:53.846975 env[1478]: time="2025-07-12T00:45:53.846874430Z" level=info msg="StopPodSandbox for \"f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c\" returns successfully" Jul 12 00:45:53.906184 kubelet[2435]: I0712 00:45:53.906057 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-799dn\" (UniqueName: \"kubernetes.io/projected/1e23e555-b47e-4e91-b378-2e289e697af1-kube-api-access-799dn\") pod \"1e23e555-b47e-4e91-b378-2e289e697af1\" (UID: \"1e23e555-b47e-4e91-b378-2e289e697af1\") " Jul 12 00:45:53.906184 kubelet[2435]: I0712 00:45:53.906117 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e23e555-b47e-4e91-b378-2e289e697af1-cilium-config-path\") pod \"1e23e555-b47e-4e91-b378-2e289e697af1\" (UID: \"1e23e555-b47e-4e91-b378-2e289e697af1\") " Jul 12 00:45:53.910226 kubelet[2435]: I0712 00:45:53.908513 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e23e555-b47e-4e91-b378-2e289e697af1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e23e555-b47e-4e91-b378-2e289e697af1" (UID: "1e23e555-b47e-4e91-b378-2e289e697af1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:45:53.910827 kubelet[2435]: I0712 00:45:53.910792 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e23e555-b47e-4e91-b378-2e289e697af1-kube-api-access-799dn" (OuterVolumeSpecName: "kube-api-access-799dn") pod "1e23e555-b47e-4e91-b378-2e289e697af1" (UID: "1e23e555-b47e-4e91-b378-2e289e697af1"). InnerVolumeSpecName "kube-api-access-799dn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:45:53.955662 kubelet[2435]: I0712 00:45:53.955629 2435 scope.go:117] "RemoveContainer" containerID="ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2" Jul 12 00:45:53.957853 env[1478]: time="2025-07-12T00:45:53.957800675Z" level=info msg="RemoveContainer for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\"" Jul 12 00:45:53.959707 systemd[1]: Removed slice kubepods-besteffort-pod1e23e555_b47e_4e91_b378_2e289e697af1.slice. Jul 12 00:45:53.977165 env[1478]: time="2025-07-12T00:45:53.977098774Z" level=info msg="RemoveContainer for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" returns successfully" Jul 12 00:45:53.977517 kubelet[2435]: I0712 00:45:53.977490 2435 scope.go:117] "RemoveContainer" containerID="ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2" Jul 12 00:45:53.982240 env[1478]: time="2025-07-12T00:45:53.982118641Z" level=error msg="ContainerStatus for \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\": not found" Jul 12 00:45:53.984078 kubelet[2435]: E0712 00:45:53.984039 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\": not 
found" containerID="ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2" Jul 12 00:45:53.985089 kubelet[2435]: I0712 00:45:53.984993 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2"} err="failed to get container status \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccbc4ed63b5f3dca4b189ab232dd2fb5c118e581b7be20f94365492cbe8f0eb2\": not found" Jul 12 00:45:53.985204 kubelet[2435]: I0712 00:45:53.985190 2435 scope.go:117] "RemoveContainer" containerID="58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d" Jul 12 00:45:53.986873 env[1478]: time="2025-07-12T00:45:53.986829664Z" level=info msg="RemoveContainer for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\"" Jul 12 00:45:54.000985 env[1478]: time="2025-07-12T00:45:54.000936333Z" level=info msg="RemoveContainer for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" returns successfully" Jul 12 00:45:54.001249 kubelet[2435]: I0712 00:45:54.001219 2435 scope.go:117] "RemoveContainer" containerID="16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b" Jul 12 00:45:54.002660 env[1478]: time="2025-07-12T00:45:54.002620995Z" level=info msg="RemoveContainer for \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\"" Jul 12 00:45:54.007084 kubelet[2435]: I0712 00:45:54.006946 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-run\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007264 kubelet[2435]: I0712 00:45:54.007246 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-bpf-maps\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007414 kubelet[2435]: I0712 00:45:54.007398 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-config-path\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007511 kubelet[2435]: I0712 00:45:54.007497 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cni-path\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007604 kubelet[2435]: I0712 00:45:54.007592 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hostproc\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007690 kubelet[2435]: I0712 00:45:54.007677 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-etc-cni-netd\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007776 kubelet[2435]: I0712 00:45:54.007763 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-cgroup\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007863 kubelet[2435]: I0712 00:45:54.007850 2435 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-xtables-lock\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.007949 kubelet[2435]: I0712 00:45:54.007937 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-lib-modules\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.008042 kubelet[2435]: I0712 00:45:54.008030 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-net\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.008130 kubelet[2435]: I0712 00:45:54.008118 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-kernel\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.008220 kubelet[2435]: I0712 00:45:54.008208 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm2cq\" (UniqueName: \"kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-kube-api-access-wm2cq\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.008335 kubelet[2435]: I0712 00:45:54.008305 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hubble-tls\") pod 
\"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.008438 kubelet[2435]: I0712 00:45:54.008425 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-clustermesh-secrets\") pod \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\" (UID: \"652fec4d-7a68-4a42-a474-fc9a7ab8cf73\") " Jul 12 00:45:54.008584 kubelet[2435]: I0712 00:45:54.008557 2435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-799dn\" (UniqueName: \"kubernetes.io/projected/1e23e555-b47e-4e91-b378-2e289e697af1-kube-api-access-799dn\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.008670 kubelet[2435]: I0712 00:45:54.008643 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e23e555-b47e-4e91-b378-2e289e697af1-cilium-config-path\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.009391 kubelet[2435]: I0712 00:45:54.007021 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009526 kubelet[2435]: I0712 00:45:54.007331 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009615 kubelet[2435]: I0712 00:45:54.009581 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:45:54.009669 kubelet[2435]: I0712 00:45:54.009637 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009669 kubelet[2435]: I0712 00:45:54.009655 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hostproc" (OuterVolumeSpecName: "hostproc") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009724 kubelet[2435]: I0712 00:45:54.009670 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009724 kubelet[2435]: I0712 00:45:54.009684 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009724 kubelet[2435]: I0712 00:45:54.009698 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009724 kubelet[2435]: I0712 00:45:54.009711 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009824 kubelet[2435]: I0712 00:45:54.009727 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.009890 kubelet[2435]: I0712 00:45:54.009870 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cni-path" (OuterVolumeSpecName: "cni-path") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:54.012869 kubelet[2435]: I0712 00:45:54.012819 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:45:54.013237 kubelet[2435]: I0712 00:45:54.013202 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-kube-api-access-wm2cq" (OuterVolumeSpecName: "kube-api-access-wm2cq") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "kube-api-access-wm2cq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:45:54.016437 kubelet[2435]: I0712 00:45:54.016398 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "652fec4d-7a68-4a42-a474-fc9a7ab8cf73" (UID: "652fec4d-7a68-4a42-a474-fc9a7ab8cf73"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:45:54.016577 env[1478]: time="2025-07-12T00:45:54.016467736Z" level=info msg="RemoveContainer for \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\" returns successfully" Jul 12 00:45:54.016866 kubelet[2435]: I0712 00:45:54.016770 2435 scope.go:117] "RemoveContainer" containerID="70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57" Jul 12 00:45:54.018697 env[1478]: time="2025-07-12T00:45:54.018372321Z" level=info msg="RemoveContainer for \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\"" Jul 12 00:45:54.035096 env[1478]: time="2025-07-12T00:45:54.035051500Z" level=info msg="RemoveContainer for \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\" returns successfully" Jul 12 00:45:54.035540 kubelet[2435]: I0712 00:45:54.035516 2435 scope.go:117] "RemoveContainer" containerID="7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794" Jul 12 00:45:54.036994 env[1478]: time="2025-07-12T00:45:54.036952964Z" level=info msg="RemoveContainer for \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\"" Jul 12 00:45:54.050162 env[1478]: time="2025-07-12T00:45:54.050111217Z" level=info msg="RemoveContainer for \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\" returns successfully" Jul 12 00:45:54.050498 kubelet[2435]: I0712 00:45:54.050476 2435 scope.go:117] "RemoveContainer" containerID="b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e" Jul 12 00:45:54.052182 env[1478]: time="2025-07-12T00:45:54.051920200Z" level=info msg="RemoveContainer for \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\"" Jul 12 00:45:54.061611 env[1478]: time="2025-07-12T00:45:54.061572487Z" level=info msg="RemoveContainer for \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\" returns successfully" Jul 12 00:45:54.062079 kubelet[2435]: I0712 00:45:54.062056 2435 scope.go:117] 
"RemoveContainer" containerID="58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d" Jul 12 00:45:54.062523 env[1478]: time="2025-07-12T00:45:54.062413658Z" level=error msg="ContainerStatus for \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\": not found" Jul 12 00:45:54.062753 kubelet[2435]: E0712 00:45:54.062685 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\": not found" containerID="58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d" Jul 12 00:45:54.062857 kubelet[2435]: I0712 00:45:54.062762 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d"} err="failed to get container status \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\": rpc error: code = NotFound desc = an error occurred when try to find container \"58e916a3597ce8f7f69fc38748d5597a9bb06c6d459749e1171037c95954227d\": not found" Jul 12 00:45:54.062906 kubelet[2435]: I0712 00:45:54.062876 2435 scope.go:117] "RemoveContainer" containerID="16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b" Jul 12 00:45:54.063171 env[1478]: time="2025-07-12T00:45:54.063111707Z" level=error msg="ContainerStatus for \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\": not found" Jul 12 00:45:54.063381 kubelet[2435]: E0712 00:45:54.063351 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\": not found" containerID="16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b" Jul 12 00:45:54.063490 kubelet[2435]: I0712 00:45:54.063468 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b"} err="failed to get container status \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\": rpc error: code = NotFound desc = an error occurred when try to find container \"16cb4b74f4de5dd90439c526ce76069a1c7f4cddbea64acf15bfb16a5ffe200b\": not found" Jul 12 00:45:54.063571 kubelet[2435]: I0712 00:45:54.063560 2435 scope.go:117] "RemoveContainer" containerID="70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57" Jul 12 00:45:54.063875 env[1478]: time="2025-07-12T00:45:54.063823396Z" level=error msg="ContainerStatus for \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\": not found" Jul 12 00:45:54.064109 kubelet[2435]: E0712 00:45:54.064088 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\": not found" containerID="70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57" Jul 12 00:45:54.064230 kubelet[2435]: I0712 00:45:54.064206 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57"} err="failed to get container status \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"70bad4abb0bb7790785b3c969c976a9d793841d9354f76866572edd54e5cfc57\": not found" Jul 12 00:45:54.064328 kubelet[2435]: I0712 00:45:54.064315 2435 scope.go:117] "RemoveContainer" containerID="7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794" Jul 12 00:45:54.064617 env[1478]: time="2025-07-12T00:45:54.064568966Z" level=error msg="ContainerStatus for \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\": not found" Jul 12 00:45:54.064746 kubelet[2435]: E0712 00:45:54.064719 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\": not found" containerID="7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794" Jul 12 00:45:54.064802 kubelet[2435]: I0712 00:45:54.064751 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794"} err="failed to get container status \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\": rpc error: code = NotFound desc = an error occurred when try to find container \"7006dbf3e817f89f9cecc57e348583a2c7dcdda4b708a9d5dfd1dd581ac70794\": not found" Jul 12 00:45:54.064802 kubelet[2435]: I0712 00:45:54.064769 2435 scope.go:117] "RemoveContainer" containerID="b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e" Jul 12 00:45:54.065083 env[1478]: time="2025-07-12T00:45:54.065033852Z" level=error msg="ContainerStatus for \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\": not found" Jul 12 00:45:54.065328 kubelet[2435]: E0712 00:45:54.065306 2435 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\": not found" containerID="b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e" Jul 12 00:45:54.065446 kubelet[2435]: I0712 00:45:54.065419 2435 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e"} err="failed to get container status \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b828e5724bf4811d12b457ea5a42a15d0a66c0d586fb707ad0b6fc0d8375a23e\": not found" Jul 12 00:45:54.109654 kubelet[2435]: I0712 00:45:54.109609 2435 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cni-path\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109654 kubelet[2435]: I0712 00:45:54.109646 2435 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hostproc\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109654 kubelet[2435]: I0712 00:45:54.109657 2435 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-xtables-lock\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109667 2435 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-lib-modules\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109678 2435 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-etc-cni-netd\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109686 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-cgroup\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109695 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-net\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109704 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109715 2435 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-clustermesh-secrets\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109724 2435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wm2cq\" (UniqueName: \"kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-kube-api-access-wm2cq\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.109833 kubelet[2435]: I0712 00:45:54.109732 2435 reconciler_common.go:299] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-hubble-tls\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.110010 kubelet[2435]: I0712 00:45:54.109740 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-run\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.110010 kubelet[2435]: I0712 00:45:54.109749 2435 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-bpf-maps\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.110010 kubelet[2435]: I0712 00:45:54.109757 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/652fec4d-7a68-4a42-a474-fc9a7ab8cf73-cilium-config-path\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:54.272752 systemd[1]: Removed slice kubepods-burstable-pod652fec4d_7a68_4a42_a474_fc9a7ab8cf73.slice. Jul 12 00:45:54.272853 systemd[1]: kubepods-burstable-pod652fec4d_7a68_4a42_a474_fc9a7ab8cf73.slice: Consumed 6.737s CPU time. Jul 12 00:45:54.618464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88fb28c2fcefac2eb1dd4b54068d38be9eba1267d7e3266363864a77d1f32d97-rootfs.mount: Deactivated successfully. Jul 12 00:45:54.618552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c-rootfs.mount: Deactivated successfully. Jul 12 00:45:54.618614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0ff2c412480e6fa1a764f91ec70bdcd251c661c215b1877b2ae422aaf39ce8c-shm.mount: Deactivated successfully. 
Jul 12 00:45:54.618678 systemd[1]: var-lib-kubelet-pods-1e23e555\x2db47e\x2d4e91\x2db378\x2d2e289e697af1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d799dn.mount: Deactivated successfully. Jul 12 00:45:54.618730 systemd[1]: var-lib-kubelet-pods-652fec4d\x2d7a68\x2d4a42\x2da474\x2dfc9a7ab8cf73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwm2cq.mount: Deactivated successfully. Jul 12 00:45:54.618785 systemd[1]: var-lib-kubelet-pods-652fec4d\x2d7a68\x2d4a42\x2da474\x2dfc9a7ab8cf73-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:45:54.618837 systemd[1]: var-lib-kubelet-pods-652fec4d\x2d7a68\x2d4a42\x2da474\x2dfc9a7ab8cf73-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:45:54.668876 kubelet[2435]: E0712 00:45:54.668815 2435 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:45:55.550252 kubelet[2435]: I0712 00:45:55.550204 2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e23e555-b47e-4e91-b378-2e289e697af1" path="/var/lib/kubelet/pods/1e23e555-b47e-4e91-b378-2e289e697af1/volumes" Jul 12 00:45:55.550674 kubelet[2435]: I0712 00:45:55.550648 2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652fec4d-7a68-4a42-a474-fc9a7ab8cf73" path="/var/lib/kubelet/pods/652fec4d-7a68-4a42-a474-fc9a7ab8cf73/volumes" Jul 12 00:45:55.644491 sshd[3990]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:55.647473 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:54252.service: Deactivated successfully. Jul 12 00:45:55.648181 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:45:55.648364 systemd[1]: session-23.scope: Consumed 1.836s CPU time. Jul 12 00:45:55.649155 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit. 
Jul 12 00:45:55.650934 systemd-logind[1470]: Removed session 23. Jul 12 00:45:55.720032 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:54258.service. Jul 12 00:45:56.168185 sshd[4157]: Accepted publickey for core from 10.200.16.10 port 54258 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:56.169881 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:56.174561 systemd[1]: Started session-24.scope. Jul 12 00:45:56.175016 systemd-logind[1470]: New session 24 of user core. Jul 12 00:45:57.234031 kubelet[2435]: I0712 00:45:57.233970 2435 memory_manager.go:355] "RemoveStaleState removing state" podUID="652fec4d-7a68-4a42-a474-fc9a7ab8cf73" containerName="cilium-agent" Jul 12 00:45:57.234031 kubelet[2435]: I0712 00:45:57.234007 2435 memory_manager.go:355] "RemoveStaleState removing state" podUID="1e23e555-b47e-4e91-b378-2e289e697af1" containerName="cilium-operator" Jul 12 00:45:57.239341 systemd[1]: Created slice kubepods-burstable-pod853ff129_b42f_40d1_843e_6510c076ec23.slice. Jul 12 00:45:57.260719 sshd[4157]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:57.266920 systemd-logind[1470]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:45:57.267106 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:45:57.268183 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:54258.service: Deactivated successfully. Jul 12 00:45:57.269807 systemd-logind[1470]: Removed session 24. 
Jul 12 00:45:57.328989 kubelet[2435]: I0712 00:45:57.328951 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-hostproc\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329206 kubelet[2435]: I0712 00:45:57.329187 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-clustermesh-secrets\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329325 kubelet[2435]: I0712 00:45:57.329310 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-etc-cni-netd\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329423 kubelet[2435]: I0712 00:45:57.329409 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-kernel\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329519 kubelet[2435]: I0712 00:45:57.329504 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qkb\" (UniqueName: \"kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-kube-api-access-77qkb\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329622 kubelet[2435]: I0712 00:45:57.329608 2435 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-net\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329702 kubelet[2435]: I0712 00:45:57.329689 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-cgroup\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329805 kubelet[2435]: I0712 00:45:57.329792 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/853ff129-b42f-40d1-843e-6510c076ec23-cilium-config-path\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329894 kubelet[2435]: I0712 00:45:57.329881 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-hubble-tls\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.329985 kubelet[2435]: I0712 00:45:57.329972 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-run\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.330072 kubelet[2435]: I0712 00:45:57.330056 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-cilium-ipsec-secrets\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.330160 kubelet[2435]: I0712 00:45:57.330147 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-xtables-lock\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.330229 kubelet[2435]: I0712 00:45:57.330217 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-bpf-maps\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.330323 kubelet[2435]: I0712 00:45:57.330309 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-lib-modules\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.330425 kubelet[2435]: I0712 00:45:57.330411 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cni-path\") pod \"cilium-k5mnc\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " pod="kube-system/cilium-k5mnc" Jul 12 00:45:57.344389 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:54262.service. 
Jul 12 00:45:57.544303 env[1478]: time="2025-07-12T00:45:57.543808904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5mnc,Uid:853ff129-b42f-40d1-843e-6510c076ec23,Namespace:kube-system,Attempt:0,}" Jul 12 00:45:57.602899 env[1478]: time="2025-07-12T00:45:57.602801826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:45:57.602899 env[1478]: time="2025-07-12T00:45:57.602849947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:45:57.603135 env[1478]: time="2025-07-12T00:45:57.602861307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:45:57.603422 env[1478]: time="2025-07-12T00:45:57.603377673Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c pid=4182 runtime=io.containerd.runc.v2 Jul 12 00:45:57.614443 systemd[1]: Started cri-containerd-106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c.scope. 
Jul 12 00:45:57.640111 env[1478]: time="2025-07-12T00:45:57.640059122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5mnc,Uid:853ff129-b42f-40d1-843e-6510c076ec23,Namespace:kube-system,Attempt:0,} returns sandbox id \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\"" Jul 12 00:45:57.643603 env[1478]: time="2025-07-12T00:45:57.643552644Z" level=info msg="CreateContainer within sandbox \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:45:57.691655 env[1478]: time="2025-07-12T00:45:57.691600392Z" level=info msg="CreateContainer within sandbox \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\"" Jul 12 00:45:57.692487 env[1478]: time="2025-07-12T00:45:57.692459363Z" level=info msg="StartContainer for \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\"" Jul 12 00:45:57.708430 systemd[1]: Started cri-containerd-4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33.scope. Jul 12 00:45:57.721425 systemd[1]: cri-containerd-4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33.scope: Deactivated successfully. 
Jul 12 00:45:57.806568 env[1478]: time="2025-07-12T00:45:57.806424476Z" level=info msg="shim disconnected" id=4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33 Jul 12 00:45:57.806782 env[1478]: time="2025-07-12T00:45:57.806762241Z" level=warning msg="cleaning up after shim disconnected" id=4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33 namespace=k8s.io Jul 12 00:45:57.806847 env[1478]: time="2025-07-12T00:45:57.806834041Z" level=info msg="cleaning up dead shim" Jul 12 00:45:57.814519 env[1478]: time="2025-07-12T00:45:57.814468175Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4240 runtime=io.containerd.runc.v2\ntime=\"2025-07-12T00:45:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 12 00:45:57.815008 env[1478]: time="2025-07-12T00:45:57.814905780Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Jul 12 00:45:57.815186 env[1478]: time="2025-07-12T00:45:57.815141863Z" level=error msg="Failed to pipe stdout of container \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\"" error="reading from a closed fifo" Jul 12 00:45:57.815331 env[1478]: time="2025-07-12T00:45:57.815299465Z" level=error msg="Failed to pipe stderr of container \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\"" error="reading from a closed fifo" Jul 12 00:45:57.822016 env[1478]: time="2025-07-12T00:45:57.821929346Z" level=error msg="StartContainer for \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Jul 12 00:45:57.822835 kubelet[2435]: E0712 00:45:57.822358 2435 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33" Jul 12 00:45:57.822835 kubelet[2435]: E0712 00:45:57.822557 2435 kuberuntime_manager.go:1341] "Unhandled Error" err=< Jul 12 00:45:57.822835 kubelet[2435]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 12 00:45:57.822835 kubelet[2435]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 12 00:45:57.822835 kubelet[2435]: rm /hostbin/cilium-mount Jul 12 00:45:57.823104 kubelet[2435]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77qkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-k5mnc_kube-system(853ff129-b42f-40d1-843e-6510c076ec23): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 12 00:45:57.823104 kubelet[2435]: > logger="UnhandledError" Jul 12 00:45:57.824030 kubelet[2435]: E0712 00:45:57.823972 2435 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k5mnc" podUID="853ff129-b42f-40d1-843e-6510c076ec23" Jul 12 00:45:57.831342 sshd[4170]: Accepted publickey for core from 10.200.16.10 port 54262 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:57.832712 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:57.837329 systemd[1]: Started session-25.scope. Jul 12 00:45:57.838353 systemd-logind[1470]: New session 25 of user core. Jul 12 00:45:57.984015 env[1478]: time="2025-07-12T00:45:57.983972528Z" level=info msg="CreateContainer within sandbox \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 12 00:45:58.032356 env[1478]: time="2025-07-12T00:45:58.032298790Z" level=info msg="CreateContainer within sandbox \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\"" Jul 12 00:45:58.033643 env[1478]: time="2025-07-12T00:45:58.032891757Z" level=info msg="StartContainer for \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\"" Jul 12 00:45:58.048632 systemd[1]: Started cri-containerd-71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801.scope. Jul 12 00:45:58.059816 systemd[1]: cri-containerd-71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801.scope: Deactivated successfully. 
Jul 12 00:45:58.085133 env[1478]: time="2025-07-12T00:45:58.085071061Z" level=info msg="shim disconnected" id=71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801 Jul 12 00:45:58.085133 env[1478]: time="2025-07-12T00:45:58.085128621Z" level=warning msg="cleaning up after shim disconnected" id=71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801 namespace=k8s.io Jul 12 00:45:58.085133 env[1478]: time="2025-07-12T00:45:58.085138782Z" level=info msg="cleaning up dead shim" Jul 12 00:45:58.092389 env[1478]: time="2025-07-12T00:45:58.092328227Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4281 runtime=io.containerd.runc.v2\ntime=\"2025-07-12T00:45:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 12 00:45:58.092675 env[1478]: time="2025-07-12T00:45:58.092611471Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Jul 12 00:45:58.094365 env[1478]: time="2025-07-12T00:45:58.094320851Z" level=error msg="Failed to pipe stdout of container \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\"" error="reading from a closed fifo" Jul 12 00:45:58.094432 env[1478]: time="2025-07-12T00:45:58.094397332Z" level=error msg="Failed to pipe stderr of container \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\"" error="reading from a closed fifo" Jul 12 00:45:58.100802 env[1478]: time="2025-07-12T00:45:58.100687327Z" level=error msg="StartContainer for \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Jul 12 00:45:58.101599 kubelet[2435]: E0712 00:45:58.101069 2435 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801" Jul 12 00:45:58.101599 kubelet[2435]: E0712 00:45:58.101215 2435 kuberuntime_manager.go:1341] "Unhandled Error" err=< Jul 12 00:45:58.101599 kubelet[2435]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 12 00:45:58.101599 kubelet[2435]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 12 00:45:58.101599 kubelet[2435]: rm /hostbin/cilium-mount Jul 12 00:45:58.101829 kubelet[2435]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77qkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-k5mnc_kube-system(853ff129-b42f-40d1-843e-6510c076ec23): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 12 00:45:58.101829 kubelet[2435]: > logger="UnhandledError" Jul 12 00:45:58.104588 kubelet[2435]: E0712 00:45:58.103196 2435 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k5mnc" podUID="853ff129-b42f-40d1-843e-6510c076ec23" Jul 12 00:45:58.272049 sshd[4170]: pam_unix(sshd:session): session closed for user core Jul 12 00:45:58.276333 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:54262.service: Deactivated successfully. Jul 12 00:45:58.277133 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:45:58.277759 systemd-logind[1470]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:45:58.278871 systemd-logind[1470]: Removed session 25. Jul 12 00:45:58.347854 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:54266.service. Jul 12 00:45:58.797703 sshd[4302]: Accepted publickey for core from 10.200.16.10 port 54266 ssh2: RSA SHA256:SOlQyDmjarbJ4MOMV5SAvhRXIiv8fyoiGui8HeDAF/4 Jul 12 00:45:58.799393 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:45:58.803320 systemd-logind[1470]: New session 26 of user core. Jul 12 00:45:58.804150 systemd[1]: Started session-26.scope. 
Jul 12 00:45:58.982860 kubelet[2435]: I0712 00:45:58.982821 2435 scope.go:117] "RemoveContainer" containerID="4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33" Jul 12 00:45:58.986831 env[1478]: time="2025-07-12T00:45:58.983982564Z" level=info msg="StopPodSandbox for \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\"" Jul 12 00:45:58.986831 env[1478]: time="2025-07-12T00:45:58.984041445Z" level=info msg="Container to stop \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:58.986831 env[1478]: time="2025-07-12T00:45:58.984055245Z" level=info msg="Container to stop \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:45:58.986184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c-shm.mount: Deactivated successfully. Jul 12 00:45:58.987574 env[1478]: time="2025-07-12T00:45:58.987527606Z" level=info msg="RemoveContainer for \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\"" Jul 12 00:45:59.003343 systemd[1]: cri-containerd-106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c.scope: Deactivated successfully. Jul 12 00:45:59.005617 env[1478]: time="2025-07-12T00:45:59.005576341Z" level=info msg="RemoveContainer for \"4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33\" returns successfully" Jul 12 00:45:59.030521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c-rootfs.mount: Deactivated successfully. 
Jul 12 00:45:59.063932 env[1478]: time="2025-07-12T00:45:59.063812221Z" level=info msg="shim disconnected" id=106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c Jul 12 00:45:59.064235 env[1478]: time="2025-07-12T00:45:59.064213145Z" level=warning msg="cleaning up after shim disconnected" id=106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c namespace=k8s.io Jul 12 00:45:59.064330 env[1478]: time="2025-07-12T00:45:59.064315226Z" level=info msg="cleaning up dead shim" Jul 12 00:45:59.073399 env[1478]: time="2025-07-12T00:45:59.073357852Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:45:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4328 runtime=io.containerd.runc.v2\n" Jul 12 00:45:59.073840 env[1478]: time="2025-07-12T00:45:59.073810977Z" level=info msg="TearDown network for sandbox \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\" successfully" Jul 12 00:45:59.073938 env[1478]: time="2025-07-12T00:45:59.073919699Z" level=info msg="StopPodSandbox for \"106accaea8830b3910d1d7cb7d83bfe8d06af47317e552b1e7ef8819b0f39a4c\" returns successfully" Jul 12 00:45:59.141266 kubelet[2435]: I0712 00:45:59.141160 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-hubble-tls\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141266 kubelet[2435]: I0712 00:45:59.141203 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-run\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141266 kubelet[2435]: I0712 00:45:59.141223 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-hostproc\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141266 kubelet[2435]: I0712 00:45:59.141245 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-clustermesh-secrets\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141266 kubelet[2435]: I0712 00:45:59.141287 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qkb\" (UniqueName: \"kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-kube-api-access-77qkb\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141683 kubelet[2435]: I0712 00:45:59.141312 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-net\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141683 kubelet[2435]: I0712 00:45:59.141330 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-etc-cni-netd\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141683 kubelet[2435]: I0712 00:45:59.141348 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-cgroup\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141683 kubelet[2435]: I0712 
00:45:59.141367 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-xtables-lock\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141683 kubelet[2435]: I0712 00:45:59.141386 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/853ff129-b42f-40d1-843e-6510c076ec23-cilium-config-path\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141683 kubelet[2435]: I0712 00:45:59.141407 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-bpf-maps\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141852 kubelet[2435]: I0712 00:45:59.141425 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-cilium-ipsec-secrets\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141852 kubelet[2435]: I0712 00:45:59.141444 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-lib-modules\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141852 kubelet[2435]: I0712 00:45:59.141461 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cni-path\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" 
(UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141852 kubelet[2435]: I0712 00:45:59.141476 2435 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-kernel\") pod \"853ff129-b42f-40d1-843e-6510c076ec23\" (UID: \"853ff129-b42f-40d1-843e-6510c076ec23\") " Jul 12 00:45:59.141852 kubelet[2435]: I0712 00:45:59.141535 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.141975 kubelet[2435]: I0712 00:45:59.141563 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.141975 kubelet[2435]: I0712 00:45:59.141578 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-hostproc" (OuterVolumeSpecName: "hostproc") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.146618 systemd[1]: var-lib-kubelet-pods-853ff129\x2db42f\x2d40d1\x2d843e\x2d6510c076ec23-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 12 00:45:59.147616 kubelet[2435]: I0712 00:45:59.142591 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.148525 kubelet[2435]: I0712 00:45:59.144653 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853ff129-b42f-40d1-843e-6510c076ec23-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:45:59.148525 kubelet[2435]: I0712 00:45:59.144684 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.148525 kubelet[2435]: I0712 00:45:59.148413 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.148525 kubelet[2435]: I0712 00:45:59.148437 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cni-path" (OuterVolumeSpecName: "cni-path") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.148875 kubelet[2435]: I0712 00:45:59.148853 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.148996 kubelet[2435]: I0712 00:45:59.148982 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.149094 kubelet[2435]: I0712 00:45:59.149081 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:45:59.152487 systemd[1]: var-lib-kubelet-pods-853ff129\x2db42f\x2d40d1\x2d843e\x2d6510c076ec23-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:45:59.153425 kubelet[2435]: I0712 00:45:59.153389 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:45:59.153538 kubelet[2435]: I0712 00:45:59.153387 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:45:59.154470 kubelet[2435]: I0712 00:45:59.154439 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:45:59.157042 kubelet[2435]: I0712 00:45:59.156975 2435 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-kube-api-access-77qkb" (OuterVolumeSpecName: "kube-api-access-77qkb") pod "853ff129-b42f-40d1-843e-6510c076ec23" (UID: "853ff129-b42f-40d1-843e-6510c076ec23"). InnerVolumeSpecName "kube-api-access-77qkb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:45:59.242279 kubelet[2435]: I0712 00:45:59.242222 2435 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-hubble-tls\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242279 kubelet[2435]: I0712 00:45:59.242258 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-run\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242279 kubelet[2435]: I0712 00:45:59.242278 2435 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-hostproc\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242279 kubelet[2435]: I0712 00:45:59.242287 2435 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-clustermesh-secrets\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242300 2435 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77qkb\" (UniqueName: \"kubernetes.io/projected/853ff129-b42f-40d1-843e-6510c076ec23-kube-api-access-77qkb\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242310 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-net\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242321 2435 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-etc-cni-netd\") 
on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242329 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cilium-cgroup\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242337 2435 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-xtables-lock\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242345 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/853ff129-b42f-40d1-843e-6510c076ec23-cilium-config-path\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242353 2435 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-bpf-maps\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242523 kubelet[2435]: I0712 00:45:59.242363 2435 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/853ff129-b42f-40d1-843e-6510c076ec23-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242707 kubelet[2435]: I0712 00:45:59.242372 2435 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-lib-modules\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242707 kubelet[2435]: I0712 00:45:59.242380 2435 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-cni-path\") on node 
\"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.242707 kubelet[2435]: I0712 00:45:59.242387 2435 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/853ff129-b42f-40d1-843e-6510c076ec23-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-2c4241d00d\" DevicePath \"\"" Jul 12 00:45:59.436463 systemd[1]: var-lib-kubelet-pods-853ff129\x2db42f\x2d40d1\x2d843e\x2d6510c076ec23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77qkb.mount: Deactivated successfully. Jul 12 00:45:59.436562 systemd[1]: var-lib-kubelet-pods-853ff129\x2db42f\x2d40d1\x2d843e\x2d6510c076ec23-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:45:59.553361 systemd[1]: Removed slice kubepods-burstable-pod853ff129_b42f_40d1_843e_6510c076ec23.slice. Jul 12 00:45:59.669693 kubelet[2435]: E0712 00:45:59.669659 2435 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:45:59.986418 kubelet[2435]: I0712 00:45:59.986390 2435 scope.go:117] "RemoveContainer" containerID="71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801" Jul 12 00:45:59.989631 env[1478]: time="2025-07-12T00:45:59.989586590Z" level=info msg="RemoveContainer for \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\"" Jul 12 00:46:00.008286 env[1478]: time="2025-07-12T00:46:00.008214526Z" level=info msg="RemoveContainer for \"71e60ac4ce05985f9990f5e74b06b9b09d8640df4858ca9aef9d0e2414a20801\" returns successfully" Jul 12 00:46:00.052356 kubelet[2435]: I0712 00:46:00.052316 2435 memory_manager.go:355] "RemoveStaleState removing state" podUID="853ff129-b42f-40d1-843e-6510c076ec23" containerName="mount-cgroup" Jul 12 00:46:00.052593 kubelet[2435]: I0712 00:46:00.052578 2435 memory_manager.go:355] "RemoveStaleState removing state" 
podUID="853ff129-b42f-40d1-843e-6510c076ec23" containerName="mount-cgroup" Jul 12 00:46:00.058316 systemd[1]: Created slice kubepods-burstable-pod63e2a8e4_e46c_4fab_8549_e856eb462f9f.slice. Jul 12 00:46:00.061134 kubelet[2435]: I0712 00:46:00.061091 2435 status_manager.go:890] "Failed to get status for pod" podUID="63e2a8e4-e46c-4fab-8549-e856eb462f9f" pod="kube-system/cilium-8529f" err="pods \"cilium-8529f\" is forbidden: User \"system:node:ci-3510.3.7-n-2c4241d00d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object" Jul 12 00:46:00.061323 kubelet[2435]: W0712 00:46:00.061110 2435 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.7-n-2c4241d00d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object Jul 12 00:46:00.061436 kubelet[2435]: E0712 00:46:00.061413 2435 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510.3.7-n-2c4241d00d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object" logger="UnhandledError" Jul 12 00:46:00.061510 kubelet[2435]: W0712 00:46:00.061254 2435 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.7-n-2c4241d00d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object Jul 12 00:46:00.061589 kubelet[2435]: E0712 00:46:00.061574 2435 reflector.go:166] 
"Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.7-n-2c4241d00d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object" logger="UnhandledError" Jul 12 00:46:00.062444 kubelet[2435]: W0712 00:46:00.062419 2435 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.7-n-2c4241d00d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object Jul 12 00:46:00.062630 kubelet[2435]: E0712 00:46:00.062585 2435 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.7-n-2c4241d00d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-2c4241d00d' and this object" logger="UnhandledError" Jul 12 00:46:00.146589 kubelet[2435]: I0712 00:46:00.146547 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-lib-modules\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.146859 kubelet[2435]: I0712 00:46:00.146841 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cilium-config-path\") pod \"cilium-8529f\" (UID: 
\"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.146964 kubelet[2435]: I0712 00:46:00.146951 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-etc-cni-netd\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147055 kubelet[2435]: I0712 00:46:00.147044 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-xtables-lock\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147154 kubelet[2435]: I0712 00:46:00.147140 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztmtd\" (UniqueName: \"kubernetes.io/projected/63e2a8e4-e46c-4fab-8549-e856eb462f9f-kube-api-access-ztmtd\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147249 kubelet[2435]: I0712 00:46:00.147237 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-host-proc-sys-net\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147382 kubelet[2435]: I0712 00:46:00.147366 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63e2a8e4-e46c-4fab-8549-e856eb462f9f-hubble-tls\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147475 kubelet[2435]: I0712 
00:46:00.147463 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cilium-run\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147563 kubelet[2435]: I0712 00:46:00.147551 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-bpf-maps\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147653 kubelet[2435]: I0712 00:46:00.147642 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-hostproc\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147749 kubelet[2435]: I0712 00:46:00.147737 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cilium-cgroup\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147850 kubelet[2435]: I0712 00:46:00.147838 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cni-path\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.147955 kubelet[2435]: I0712 00:46:00.147939 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cilium-ipsec-secrets\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.148059 kubelet[2435]: I0712 00:46:00.148046 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63e2a8e4-e46c-4fab-8549-e856eb462f9f-host-proc-sys-kernel\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.148169 kubelet[2435]: I0712 00:46:00.148156 2435 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63e2a8e4-e46c-4fab-8549-e856eb462f9f-clustermesh-secrets\") pod \"cilium-8529f\" (UID: \"63e2a8e4-e46c-4fab-8549-e856eb462f9f\") " pod="kube-system/cilium-8529f" Jul 12 00:46:00.912153 kubelet[2435]: W0712 00:46:00.912107 2435 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853ff129_b42f_40d1_843e_6510c076ec23.slice/cri-containerd-4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33.scope WatchSource:0}: container "4effeb059d1fbb9c9179f6716de633a3b61ee3138b792227017c2c69ce67cf33" in namespace "k8s.io": not found Jul 12 00:46:01.249713 kubelet[2435]: E0712 00:46:01.249580 2435 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 12 00:46:01.249713 kubelet[2435]: E0712 00:46:01.249691 2435 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63e2a8e4-e46c-4fab-8549-e856eb462f9f-clustermesh-secrets podName:63e2a8e4-e46c-4fab-8549-e856eb462f9f nodeName:}" failed. No retries permitted until 2025-07-12 00:46:01.74966722 +0000 UTC m=+192.311852031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/63e2a8e4-e46c-4fab-8549-e856eb462f9f-clustermesh-secrets") pod "cilium-8529f" (UID: "63e2a8e4-e46c-4fab-8549-e856eb462f9f") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:46:01.250099 kubelet[2435]: E0712 00:46:01.249580 2435 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 12 00:46:01.250099 kubelet[2435]: E0712 00:46:01.249978 2435 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cilium-ipsec-secrets podName:63e2a8e4-e46c-4fab-8549-e856eb462f9f nodeName:}" failed. No retries permitted until 2025-07-12 00:46:01.749965664 +0000 UTC m=+192.312150475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/63e2a8e4-e46c-4fab-8549-e856eb462f9f-cilium-ipsec-secrets") pod "cilium-8529f" (UID: "63e2a8e4-e46c-4fab-8549-e856eb462f9f") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:46:01.550644 kubelet[2435]: I0712 00:46:01.550588 2435 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853ff129-b42f-40d1-843e-6510c076ec23" path="/var/lib/kubelet/pods/853ff129-b42f-40d1-843e-6510c076ec23/volumes" Jul 12 00:46:01.862026 env[1478]: time="2025-07-12T00:46:01.861445715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8529f,Uid:63e2a8e4-e46c-4fab-8549-e856eb462f9f,Namespace:kube-system,Attempt:0,}" Jul 12 00:46:01.911764 env[1478]: time="2025-07-12T00:46:01.911672195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:46:01.911927 env[1478]: time="2025-07-12T00:46:01.911774596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:46:01.911927 env[1478]: time="2025-07-12T00:46:01.911802036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:46:01.912085 env[1478]: time="2025-07-12T00:46:01.912040439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068 pid=4358 runtime=io.containerd.runc.v2 Jul 12 00:46:01.933940 systemd[1]: Started cri-containerd-6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068.scope. Jul 12 00:46:01.961148 env[1478]: time="2025-07-12T00:46:01.961072065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8529f,Uid:63e2a8e4-e46c-4fab-8549-e856eb462f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\"" Jul 12 00:46:01.965584 env[1478]: time="2025-07-12T00:46:01.965540955Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:46:02.014259 env[1478]: time="2025-07-12T00:46:02.014200214Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88\"" Jul 12 00:46:02.016693 env[1478]: time="2025-07-12T00:46:02.016517039Z" level=info msg="StartContainer for \"dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88\"" Jul 12 00:46:02.033053 systemd[1]: Started cri-containerd-dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88.scope. 
Jul 12 00:46:02.070488 env[1478]: time="2025-07-12T00:46:02.070397025Z" level=info msg="StartContainer for \"dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88\" returns successfully"
Jul 12 00:46:02.074941 systemd[1]: cri-containerd-dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88.scope: Deactivated successfully.
Jul 12 00:46:02.119032 env[1478]: time="2025-07-12T00:46:02.118910393Z" level=info msg="shim disconnected" id=dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88
Jul 12 00:46:02.119493 env[1478]: time="2025-07-12T00:46:02.119466719Z" level=warning msg="cleaning up after shim disconnected" id=dff66275264e40d42e7f4d1a1086779f05653cb1ec049042aba80db301c67c88 namespace=k8s.io
Jul 12 00:46:02.119601 env[1478]: time="2025-07-12T00:46:02.119581920Z" level=info msg="cleaning up dead shim"
Jul 12 00:46:02.127658 env[1478]: time="2025-07-12T00:46:02.127610207Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:46:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4446 runtime=io.containerd.runc.v2\n"
Jul 12 00:46:02.765077 systemd[1]: run-containerd-runc-k8s.io-6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068-runc.eNsTzH.mount: Deactivated successfully.
Jul 12 00:46:03.001996 env[1478]: time="2025-07-12T00:46:03.001937718Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:46:03.058142 env[1478]: time="2025-07-12T00:46:03.058025073Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d\""
Jul 12 00:46:03.058766 env[1478]: time="2025-07-12T00:46:03.058729641Z" level=info msg="StartContainer for \"158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d\""
Jul 12 00:46:03.082789 systemd[1]: Started cri-containerd-158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d.scope.
Jul 12 00:46:03.114185 env[1478]: time="2025-07-12T00:46:03.114119549Z" level=info msg="StartContainer for \"158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d\" returns successfully"
Jul 12 00:46:03.116708 systemd[1]: cri-containerd-158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d.scope: Deactivated successfully.
Jul 12 00:46:03.149620 env[1478]: time="2025-07-12T00:46:03.149566766Z" level=info msg="shim disconnected" id=158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d
Jul 12 00:46:03.149620 env[1478]: time="2025-07-12T00:46:03.149613086Z" level=warning msg="cleaning up after shim disconnected" id=158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d namespace=k8s.io
Jul 12 00:46:03.149620 env[1478]: time="2025-07-12T00:46:03.149623286Z" level=info msg="cleaning up dead shim"
Jul 12 00:46:03.156951 env[1478]: time="2025-07-12T00:46:03.156899124Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:46:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4507 runtime=io.containerd.runc.v2\n"
Jul 12 00:46:03.765146 systemd[1]: run-containerd-runc-k8s.io-158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d-runc.FZ9a8b.mount: Deactivated successfully.
Jul 12 00:46:03.765246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-158c64a96dd1a06f20afa1f1130946ade6ea81931318baa587c892be88948d0d-rootfs.mount: Deactivated successfully.
Jul 12 00:46:04.004729 env[1478]: time="2025-07-12T00:46:04.004669866Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:46:04.061023 env[1478]: time="2025-07-12T00:46:04.060825028Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f\""
Jul 12 00:46:04.062362 env[1478]: time="2025-07-12T00:46:04.062267443Z" level=info msg="StartContainer for \"22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f\""
Jul 12 00:46:04.103337 systemd[1]: Started cri-containerd-22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f.scope.
Jul 12 00:46:04.149339 env[1478]: time="2025-07-12T00:46:04.149289865Z" level=info msg="StartContainer for \"22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f\" returns successfully"
Jul 12 00:46:04.150457 systemd[1]: cri-containerd-22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f.scope: Deactivated successfully.
Jul 12 00:46:04.187492 env[1478]: time="2025-07-12T00:46:04.187435540Z" level=info msg="shim disconnected" id=22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f
Jul 12 00:46:04.187492 env[1478]: time="2025-07-12T00:46:04.187485261Z" level=warning msg="cleaning up after shim disconnected" id=22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f namespace=k8s.io
Jul 12 00:46:04.187492 env[1478]: time="2025-07-12T00:46:04.187495301Z" level=info msg="cleaning up dead shim"
Jul 12 00:46:04.195123 env[1478]: time="2025-07-12T00:46:04.195064619Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4564 runtime=io.containerd.runc.v2\n"
Jul 12 00:46:04.272150 kubelet[2435]: I0712 00:46:04.272089 2435 setters.go:602] "Node became not ready" node="ci-3510.3.7-n-2c4241d00d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:46:04Z","lastTransitionTime":"2025-07-12T00:46:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 12 00:46:04.671405 kubelet[2435]: E0712 00:46:04.671358 2435 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:46:04.765195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22420a8a9d74ca814185779d73861a0c872339d91533094bdf659141c127483f-rootfs.mount: Deactivated successfully.
Jul 12 00:46:05.008565 env[1478]: time="2025-07-12T00:46:05.008520570Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:46:05.055683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300184223.mount: Deactivated successfully.
Jul 12 00:46:05.081298 env[1478]: time="2025-07-12T00:46:05.081217025Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410\""
Jul 12 00:46:05.081895 env[1478]: time="2025-07-12T00:46:05.081861871Z" level=info msg="StartContainer for \"3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410\""
Jul 12 00:46:05.098124 systemd[1]: Started cri-containerd-3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410.scope.
Jul 12 00:46:05.125625 systemd[1]: cri-containerd-3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410.scope: Deactivated successfully.
Jul 12 00:46:05.128108 env[1478]: time="2025-07-12T00:46:05.128060819Z" level=info msg="StartContainer for \"3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410\" returns successfully"
Jul 12 00:46:05.161160 env[1478]: time="2025-07-12T00:46:05.161098753Z" level=info msg="shim disconnected" id=3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410
Jul 12 00:46:05.161160 env[1478]: time="2025-07-12T00:46:05.161151354Z" level=warning msg="cleaning up after shim disconnected" id=3f9686d959fc79da288683bff685fe1bed8ea8ab33e47e8e061dc5f10df39410 namespace=k8s.io
Jul 12 00:46:05.161160 env[1478]: time="2025-07-12T00:46:05.161160594Z" level=info msg="cleaning up dead shim"
Jul 12 00:46:05.169263 env[1478]: time="2025-07-12T00:46:05.169212555Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:46:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4620 runtime=io.containerd.runc.v2\n"
Jul 12 00:46:06.012497 env[1478]: time="2025-07-12T00:46:06.012452802Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:46:06.066614 env[1478]: time="2025-07-12T00:46:06.066560416Z" level=info msg="CreateContainer within sandbox \"6aff45a16fdfa34e322ac9e008be0153536b6fd26d5f3b1bade812fb22c82068\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d18abda21edba7d789fc6359bc98f0064609a9677d47d4526381d1c5840160fe\""
Jul 12 00:46:06.067400 env[1478]: time="2025-07-12T00:46:06.067330544Z" level=info msg="StartContainer for \"d18abda21edba7d789fc6359bc98f0064609a9677d47d4526381d1c5840160fe\""
Jul 12 00:46:06.089974 systemd[1]: Started cri-containerd-d18abda21edba7d789fc6359bc98f0064609a9677d47d4526381d1c5840160fe.scope.
Jul 12 00:46:06.132449 env[1478]: time="2025-07-12T00:46:06.132394706Z" level=info msg="StartContainer for \"d18abda21edba7d789fc6359bc98f0064609a9677d47d4526381d1c5840160fe\" returns successfully"
Jul 12 00:46:06.700319 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 12 00:46:07.370402 systemd[1]: run-containerd-runc-k8s.io-d18abda21edba7d789fc6359bc98f0064609a9677d47d4526381d1c5840160fe-runc.WJBccB.mount: Deactivated successfully.
Jul 12 00:46:09.367461 systemd-networkd[1640]: lxc_health: Link UP
Jul 12 00:46:09.384375 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 12 00:46:09.384517 systemd-networkd[1640]: lxc_health: Gained carrier
Jul 12 00:46:09.890480 kubelet[2435]: I0712 00:46:09.890414 2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8529f" podStartSLOduration=9.890388356999999 podStartE2EDuration="9.890388357s" podCreationTimestamp="2025-07-12 00:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:46:07.038947485 +0000 UTC m=+197.601132296" watchObservedRunningTime="2025-07-12 00:46:09.890388357 +0000 UTC m=+200.452573168"
Jul 12 00:46:11.304499 systemd-networkd[1640]: lxc_health: Gained IPv6LL
Jul 12 00:46:11.694882 systemd[1]: run-containerd-runc-k8s.io-d18abda21edba7d789fc6359bc98f0064609a9677d47d4526381d1c5840160fe-runc.CGB8xP.mount: Deactivated successfully.
Jul 12 00:46:16.090664 sshd[4302]: pam_unix(sshd:session): session closed for user core
Jul 12 00:46:16.093642 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:54266.service: Deactivated successfully.
Jul 12 00:46:16.094367 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:46:16.094989 systemd-logind[1470]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:46:16.095821 systemd-logind[1470]: Removed session 26.