Apr 12 18:28:50.040381 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 12 18:28:50.040400 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Apr 12 17:21:24 -00 2024 Apr 12 18:28:50.040408 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 12 18:28:50.040416 kernel: printk: bootconsole [pl11] enabled Apr 12 18:28:50.040421 kernel: efi: EFI v2.70 by EDK II Apr 12 18:28:50.040426 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37b33f98 Apr 12 18:28:50.040433 kernel: random: crng init done Apr 12 18:28:50.040438 kernel: ACPI: Early table checksum verification disabled Apr 12 18:28:50.040444 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Apr 12 18:28:50.040449 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040454 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040461 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Apr 12 18:28:50.040467 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040472 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040479 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040484 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040491 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040498 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 12 18:28:50.040504 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 12 18:28:50.040510 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:28:50.040515 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 12 18:28:50.040521 kernel: NUMA: Failed to initialise from firmware Apr 12 18:28:50.040527 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Apr 12 18:28:50.040533 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff] Apr 12 18:28:50.040539 kernel: Zone ranges: Apr 12 18:28:50.040544 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 12 18:28:50.040550 kernel: DMA32 empty Apr 12 18:28:50.040557 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 12 18:28:50.040562 kernel: Movable zone start for each node Apr 12 18:28:50.040568 kernel: Early memory node ranges Apr 12 18:28:50.040573 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 12 18:28:50.040579 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Apr 12 18:28:50.040585 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Apr 12 18:28:50.040590 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Apr 12 18:28:50.040596 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Apr 12 18:28:50.040602 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Apr 12 18:28:50.040607 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Apr 12 18:28:50.040613 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Apr 12 18:28:50.040619 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 12 18:28:50.040626 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 12 18:28:50.040635 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 12 18:28:50.040641 kernel: psci: probing for conduit method from ACPI. Apr 12 18:28:50.040647 kernel: psci: PSCIv1.1 detected in firmware.
Apr 12 18:28:50.040653 kernel: psci: Using standard PSCI v0.2 function IDs Apr 12 18:28:50.040661 kernel: psci: MIGRATE_INFO_TYPE not supported. Apr 12 18:28:50.040667 kernel: psci: SMC Calling Convention v1.4 Apr 12 18:28:50.040673 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Apr 12 18:28:50.040679 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Apr 12 18:28:50.040685 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Apr 12 18:28:50.040691 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Apr 12 18:28:50.040697 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 12 18:28:50.040703 kernel: Detected PIPT I-cache on CPU0 Apr 12 18:28:50.040709 kernel: CPU features: detected: GIC system register CPU interface Apr 12 18:28:50.040715 kernel: CPU features: detected: Hardware dirty bit management Apr 12 18:28:50.040721 kernel: CPU features: detected: Spectre-BHB Apr 12 18:28:50.040728 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 12 18:28:50.040735 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 12 18:28:50.040741 kernel: CPU features: detected: ARM erratum 1418040 Apr 12 18:28:50.040747 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Apr 12 18:28:50.040753 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 12 18:28:50.040759 kernel: Policy zone: Normal Apr 12 18:28:50.040767 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8 Apr 12 18:28:50.040773 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Apr 12 18:28:50.040780 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 12 18:28:50.040786 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 12 18:28:50.040792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 12 18:28:50.040799 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB) Apr 12 18:28:50.040806 kernel: Memory: 3990260K/4194160K available (9792K kernel code, 2092K rwdata, 7568K rodata, 36352K init, 777K bss, 203900K reserved, 0K cma-reserved) Apr 12 18:28:50.040812 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 12 18:28:50.040818 kernel: trace event string verifier disabled Apr 12 18:28:50.040824 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 12 18:28:50.040830 kernel: rcu: RCU event tracing is enabled. Apr 12 18:28:50.040837 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 12 18:28:50.040843 kernel: Trampoline variant of Tasks RCU enabled. Apr 12 18:28:50.040849 kernel: Tracing variant of Tasks RCU enabled. Apr 12 18:28:50.040855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 12 18:28:50.040861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 12 18:28:50.040868 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 12 18:28:50.040875 kernel: GICv3: 960 SPIs implemented Apr 12 18:28:50.040881 kernel: GICv3: 0 Extended SPIs implemented Apr 12 18:28:50.040887 kernel: GICv3: Distributor has no Range Selector support Apr 12 18:28:50.040893 kernel: Root IRQ handler: gic_handle_irq Apr 12 18:28:50.040899 kernel: GICv3: 16 PPIs implemented Apr 12 18:28:50.040905 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 12 18:28:50.040911 kernel: ITS: No ITS available, not enabling LPIs Apr 12 18:28:50.040918 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 12 18:28:50.040924 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 12 18:28:50.040930 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 12 18:28:50.040936 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 12 18:28:50.040944 kernel: Console: colour dummy device 80x25 Apr 12 18:28:50.040950 kernel: printk: console [tty1] enabled Apr 12 18:28:50.040956 kernel: ACPI: Core revision 20210730 Apr 12 18:28:50.040963 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 12 18:28:50.040970 kernel: pid_max: default: 32768 minimum: 301 Apr 12 18:28:50.040976 kernel: LSM: Security Framework initializing Apr 12 18:28:50.040982 kernel: SELinux: Initializing. 
Apr 12 18:28:50.040989 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 12 18:28:50.040995 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 12 18:28:50.041002 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Apr 12 18:28:50.041009 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Apr 12 18:28:50.041015 kernel: rcu: Hierarchical SRCU implementation. Apr 12 18:28:50.041021 kernel: Remapping and enabling EFI services. Apr 12 18:28:50.041027 kernel: smp: Bringing up secondary CPUs ... Apr 12 18:28:50.041033 kernel: Detected PIPT I-cache on CPU1 Apr 12 18:28:50.041040 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 12 18:28:50.041046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 12 18:28:50.041052 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 12 18:28:50.041060 kernel: smp: Brought up 1 node, 2 CPUs Apr 12 18:28:50.041066 kernel: SMP: Total of 2 processors activated. 
Apr 12 18:28:50.041072 kernel: CPU features: detected: 32-bit EL0 Support Apr 12 18:28:50.041079 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 12 18:28:50.041086 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 12 18:28:50.041092 kernel: CPU features: detected: CRC32 instructions Apr 12 18:28:50.041098 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 12 18:28:50.041104 kernel: CPU features: detected: LSE atomic instructions Apr 12 18:28:50.041110 kernel: CPU features: detected: Privileged Access Never Apr 12 18:28:50.041118 kernel: CPU: All CPU(s) started at EL1 Apr 12 18:28:50.041124 kernel: alternatives: patching kernel code Apr 12 18:28:50.041135 kernel: devtmpfs: initialized Apr 12 18:28:50.041142 kernel: KASLR enabled Apr 12 18:28:50.041149 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 12 18:28:50.041156 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 12 18:28:50.041162 kernel: pinctrl core: initialized pinctrl subsystem Apr 12 18:28:50.041169 kernel: SMBIOS 3.1.0 present. 
Apr 12 18:28:50.041175 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Apr 12 18:28:50.041182 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 12 18:28:50.041190 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 12 18:28:50.041197 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 12 18:28:50.041203 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 12 18:28:50.041210 kernel: audit: initializing netlink subsys (disabled) Apr 12 18:28:50.041216 kernel: audit: type=2000 audit(0.091:1): state=initialized audit_enabled=0 res=1 Apr 12 18:28:50.041223 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 12 18:28:50.041230 kernel: cpuidle: using governor menu Apr 12 18:28:50.041238 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 12 18:28:50.041245 kernel: ASID allocator initialised with 32768 entries Apr 12 18:28:50.041251 kernel: ACPI: bus type PCI registered Apr 12 18:28:50.041258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 12 18:28:50.041264 kernel: Serial: AMBA PL011 UART driver Apr 12 18:28:50.041271 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Apr 12 18:28:50.041278 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Apr 12 18:28:50.041284 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Apr 12 18:28:50.041291 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Apr 12 18:28:50.041299 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:28:50.041305 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 12 18:28:50.041312 kernel: ACPI: Added _OSI(Module Device) Apr 12 18:28:50.041318 kernel: ACPI: Added _OSI(Processor Device) Apr 12 18:28:50.041325 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:28:50.041340 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 12 18:28:50.041347 kernel: ACPI: Added _OSI(Linux-Dell-Video) Apr 12 18:28:50.041354 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Apr 12 18:28:50.041360 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Apr 12 18:28:50.041368 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 12 18:28:50.041375 kernel: ACPI: Interpreter enabled Apr 12 18:28:50.041382 kernel: ACPI: Using GIC for interrupt routing Apr 12 18:28:50.041388 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 12 18:28:50.041395 kernel: printk: console [ttyAMA0] enabled Apr 12 18:28:50.041401 kernel: printk: bootconsole [pl11] disabled Apr 12 18:28:50.041408 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 12 18:28:50.041415 kernel: iommu: Default domain type: Translated Apr 12 18:28:50.041421 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 12 18:28:50.041429 kernel: vgaarb: loaded Apr 12 18:28:50.041436 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:28:50.041442 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 12 18:28:50.041449 kernel: PTP clock support registered Apr 12 18:28:50.041456 kernel: Registered efivars operations Apr 12 18:28:50.041462 kernel: No ACPI PMU IRQ for CPU0 Apr 12 18:28:50.041469 kernel: No ACPI PMU IRQ for CPU1 Apr 12 18:28:50.041475 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 12 18:28:50.041482 kernel: VFS: Disk quotas dquot_6.6.0 Apr 12 18:28:50.041489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 12 18:28:50.041496 kernel: pnp: PnP ACPI init Apr 12 18:28:50.041502 kernel: pnp: PnP ACPI: found 0 devices Apr 12 18:28:50.041509 kernel: NET: Registered PF_INET protocol family Apr 12 18:28:50.041516 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 12 18:28:50.041522 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 12 18:28:50.041529 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 12 18:28:50.041536 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 12 18:28:50.041543 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Apr 12 18:28:50.041551 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 12 18:28:50.041558 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 12 18:28:50.041564 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 12 18:28:50.041571 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 12 18:28:50.041578 kernel: PCI: CLS 0 bytes, default 64 Apr 12 18:28:50.041584 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Apr 12 18:28:50.041591 kernel: kvm [1]: HYP mode not available Apr 12 18:28:50.041598 kernel: Initialise system trusted keyrings Apr 12 18:28:50.041604 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:28:50.041612 kernel: Key type asymmetric registered Apr 12 18:28:50.041618 kernel: Asymmetric key parser 'x509' registered Apr 12 18:28:50.041625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 12 18:28:50.041632 kernel: io scheduler mq-deadline registered Apr 12 18:28:50.041638 kernel: io scheduler kyber registered Apr 12 18:28:50.041645 kernel: io scheduler bfq registered Apr 12 18:28:50.041651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 12 18:28:50.041658 kernel: thunder_xcv, ver 1.0 Apr 12 18:28:50.041664 kernel: thunder_bgx, ver 1.0 Apr 12 18:28:50.041672 kernel: nicpf, ver 1.0 Apr 12 18:28:50.041678 kernel: nicvf, ver 1.0 Apr 12 18:28:50.041808 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 12 18:28:50.041872 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-04-12T18:28:49 UTC (1712946529) Apr 12 18:28:50.041881 kernel: efifb: probing for efifb Apr 12 18:28:50.041887 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 12 18:28:50.041894 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 12 18:28:50.041901 kernel: efifb: scrolling: redraw Apr 12 18:28:50.041910 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 12 18:28:50.041917 kernel: Console: switching to colour frame buffer device 128x48 Apr 12 18:28:50.041924 kernel: fb0: EFI VGA frame buffer device Apr 12 18:28:50.041930 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 12 18:28:50.041937 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 12 18:28:50.041944 kernel: NET: Registered PF_INET6 protocol family Apr 12 18:28:50.041958 kernel: Segment Routing with IPv6 Apr 12 18:28:50.041965 kernel: In-situ OAM (IOAM) with IPv6 Apr 12 18:28:50.041971 kernel: NET: Registered PF_PACKET protocol family Apr 12 18:28:50.041979 kernel: Key type dns_resolver registered Apr 12 18:28:50.041986 kernel: registered taskstats version 1 Apr 12 18:28:50.041992 kernel: Loading compiled-in X.509 certificates Apr 12 18:28:50.041999 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 8c258d82bbd8df4a9da2c0ea4108142f04be6b34' Apr 12 18:28:50.042006 kernel: Key type .fscrypt registered Apr 12 18:28:50.042017 kernel: Key type fscrypt-provisioning registered Apr 12 18:28:50.042023 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 12 18:28:50.042030 kernel: ima: Allocated hash algorithm: sha1 Apr 12 18:28:50.042036 kernel: ima: No architecture policies found Apr 12 18:28:50.042044 kernel: Freeing unused kernel memory: 36352K Apr 12 18:28:50.042055 kernel: Run /init as init process Apr 12 18:28:50.042062 kernel: with arguments: Apr 12 18:28:50.042068 kernel: /init Apr 12 18:28:50.042075 kernel: with environment: Apr 12 18:28:50.042081 kernel: HOME=/ Apr 12 18:28:50.042092 kernel: TERM=linux Apr 12 18:28:50.042099 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 12 18:28:50.042108 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:28:50.042119 systemd[1]: Detected virtualization microsoft. Apr 12 18:28:50.042126 systemd[1]: Detected architecture arm64. Apr 12 18:28:50.042133 systemd[1]: Running in initrd. 
Apr 12 18:28:50.042140 systemd[1]: No hostname configured, using default hostname. Apr 12 18:28:50.042147 systemd[1]: Hostname set to . Apr 12 18:28:50.042154 systemd[1]: Initializing machine ID from random generator. Apr 12 18:28:50.042161 systemd[1]: Queued start job for default target initrd.target. Apr 12 18:28:50.042174 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:28:50.042181 systemd[1]: Reached target cryptsetup.target. Apr 12 18:28:50.042188 systemd[1]: Reached target paths.target. Apr 12 18:28:50.042194 systemd[1]: Reached target slices.target. Apr 12 18:28:50.042201 systemd[1]: Reached target swap.target. Apr 12 18:28:50.042208 systemd[1]: Reached target timers.target. Apr 12 18:28:50.042216 systemd[1]: Listening on iscsid.socket. Apr 12 18:28:50.042223 systemd[1]: Listening on iscsiuio.socket. Apr 12 18:28:50.042231 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:28:50.042239 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:28:50.042246 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:28:50.042257 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:28:50.042264 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:28:50.042271 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:28:50.042278 systemd[1]: Reached target sockets.target. Apr 12 18:28:50.042285 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:28:50.042292 systemd[1]: Finished network-cleanup.service. Apr 12 18:28:50.042300 systemd[1]: Starting systemd-fsck-usr.service... Apr 12 18:28:50.042307 systemd[1]: Starting systemd-journald.service... Apr 12 18:28:50.042318 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:28:50.042326 systemd[1]: Starting systemd-resolved.service... Apr 12 18:28:50.044411 systemd[1]: Starting systemd-vconsole-setup.service... 
Apr 12 18:28:50.044430 systemd-journald[235]: Journal started Apr 12 18:28:50.044496 systemd-journald[235]: Runtime Journal (/run/log/journal/448be8a68b1141069907fb821fba7d46) is 8.0M, max 78.6M, 70.6M free. Apr 12 18:28:50.018410 systemd-modules-load[236]: Inserted module 'overlay' Apr 12 18:28:50.072617 systemd[1]: Started systemd-journald.service. Apr 12 18:28:50.072642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 18:28:50.063629 systemd-resolved[237]: Positive Trust Anchors: Apr 12 18:28:50.063639 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:28:50.130410 kernel: Bridge firewalling registered Apr 12 18:28:50.130444 kernel: audit: type=1130 audit(1712946530.092:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.130455 kernel: SCSI subsystem initialized Apr 12 18:28:50.130464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:28:50.130473 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:28:50.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.063672 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:28:50.184077 kernel: audit: type=1130 audit(1712946530.134:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.065803 systemd-resolved[237]: Defaulting to hostname 'linux'. Apr 12 18:28:50.253121 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:28:50.253146 kernel: audit: type=1130 audit(1712946530.203:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.253157 kernel: audit: type=1130 audit(1712946530.227:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:28:50.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.081252 systemd-modules-load[236]: Inserted module 'br_netfilter' Apr 12 18:28:50.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.092898 systemd[1]: Started systemd-resolved.service. Apr 12 18:28:50.298179 kernel: audit: type=1130 audit(1712946530.256:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.298208 kernel: audit: type=1130 audit(1712946530.277:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.184679 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:28:50.198628 systemd-modules-load[236]: Inserted module 'dm_multipath' Apr 12 18:28:50.204005 systemd[1]: Finished systemd-fsck-usr.service. Apr 12 18:28:50.227551 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:28:50.256758 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 18:28:50.278482 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:28:50.373708 kernel: audit: type=1130 audit(1712946530.348:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.306493 systemd[1]: Starting dracut-cmdline-ask.service... Apr 12 18:28:50.315485 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:28:50.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.332747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:28:50.428436 kernel: audit: type=1130 audit(1712946530.373:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.428463 kernel: audit: type=1130 audit(1712946530.397:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.343398 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:28:50.348731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:28:50.438016 dracut-cmdline[258]: dracut-dracut-053 Apr 12 18:28:50.438016 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8 Apr 12 18:28:50.373986 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:28:50.399359 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:28:50.509351 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:28:50.523372 kernel: iscsi: registered transport (tcp) Apr 12 18:28:50.544032 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:28:50.544093 kernel: QLogic iSCSI HBA Driver Apr 12 18:28:50.582136 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:28:50.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:50.587549 systemd[1]: Starting dracut-pre-udev.service... 
Apr 12 18:28:50.643354 kernel: raid6: neonx8 gen() 13816 MB/s Apr 12 18:28:50.661344 kernel: raid6: neonx8 xor() 10822 MB/s Apr 12 18:28:50.681344 kernel: raid6: neonx4 gen() 13523 MB/s Apr 12 18:28:50.702359 kernel: raid6: neonx4 xor() 10859 MB/s Apr 12 18:28:50.722354 kernel: raid6: neonx2 gen() 12958 MB/s Apr 12 18:28:50.742343 kernel: raid6: neonx2 xor() 10382 MB/s Apr 12 18:28:50.763342 kernel: raid6: neonx1 gen() 10565 MB/s Apr 12 18:28:50.783342 kernel: raid6: neonx1 xor() 8791 MB/s Apr 12 18:28:50.803343 kernel: raid6: int64x8 gen() 6262 MB/s Apr 12 18:28:50.824342 kernel: raid6: int64x8 xor() 3545 MB/s Apr 12 18:28:50.844341 kernel: raid6: int64x4 gen() 7202 MB/s Apr 12 18:28:50.864342 kernel: raid6: int64x4 xor() 3854 MB/s Apr 12 18:28:50.885346 kernel: raid6: int64x2 gen() 6152 MB/s Apr 12 18:28:50.905341 kernel: raid6: int64x2 xor() 3321 MB/s Apr 12 18:28:50.925341 kernel: raid6: int64x1 gen() 5036 MB/s Apr 12 18:28:50.950616 kernel: raid6: int64x1 xor() 2646 MB/s Apr 12 18:28:50.950627 kernel: raid6: using algorithm neonx8 gen() 13816 MB/s Apr 12 18:28:50.950636 kernel: raid6: .... xor() 10822 MB/s, rmw enabled Apr 12 18:28:50.960764 kernel: raid6: using neon recovery algorithm Apr 12 18:28:50.974348 kernel: xor: measuring software checksum speed Apr 12 18:28:50.974361 kernel: 8regs : 17289 MB/sec Apr 12 18:28:50.985813 kernel: 32regs : 20755 MB/sec Apr 12 18:28:50.985824 kernel: arm64_neon : 27930 MB/sec Apr 12 18:28:50.985831 kernel: xor: using function: arm64_neon (27930 MB/sec) Apr 12 18:28:51.046355 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Apr 12 18:28:51.057721 systemd[1]: Finished dracut-pre-udev.service.
Apr 12 18:28:51.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:51.065000 audit: BPF prog-id=7 op=LOAD Apr 12 18:28:51.065000 audit: BPF prog-id=8 op=LOAD Apr 12 18:28:51.066616 systemd[1]: Starting systemd-udevd.service... Apr 12 18:28:51.081263 systemd-udevd[435]: Using default interface naming scheme 'v252'. Apr 12 18:28:51.087719 systemd[1]: Started systemd-udevd.service. Apr 12 18:28:51.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:51.097893 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:28:51.117120 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Apr 12 18:28:51.145414 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:28:51.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:51.151491 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:28:51.189609 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:28:51.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:51.244359 kernel: hv_vmbus: Vmbus version:5.3 Apr 12 18:28:51.267128 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 12 18:28:51.267186 kernel: hv_vmbus: registering driver hid_hyperv Apr 12 18:28:51.267196 kernel: hv_vmbus: registering driver hv_storvsc Apr 12 18:28:51.287005 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 12 18:28:51.287056 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 12 18:28:51.287077 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 12 18:28:51.299620 kernel: hv_vmbus: registering driver hv_netvsc Apr 12 18:28:51.299678 kernel: scsi host1: storvsc_host_t Apr 12 18:28:51.306353 kernel: scsi host0: storvsc_host_t Apr 12 18:28:51.314756 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 12 18:28:51.321660 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 12 18:28:51.347125 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 12 18:28:51.347412 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:28:51.355357 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 12 18:28:51.355540 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 12 18:28:51.355629 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 12 18:28:51.364689 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 12 18:28:51.364887 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 12 18:28:51.364972 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 12 18:28:51.377368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:28:51.382367 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 12 18:28:51.401352 kernel: hv_netvsc 002248bc-42bb-0022-48bc-42bb002248bc eth0: VF slot 1 added Apr 12 18:28:51.410369
kernel: hv_vmbus: registering driver hv_pci Apr 12 18:28:51.420589 kernel: hv_pci b3ef2023-599c-465d-95ef-c4065aa291b6: PCI VMBus probing: Using version 0x10004 Apr 12 18:28:51.420809 kernel: hv_pci b3ef2023-599c-465d-95ef-c4065aa291b6: PCI host bridge to bus 599c:00 Apr 12 18:28:51.435586 kernel: pci_bus 599c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 12 18:28:51.435761 kernel: pci_bus 599c:00: No busn resource found for root bus, will use [bus 00-ff] Apr 12 18:28:51.447372 kernel: pci 599c:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 12 18:28:51.458370 kernel: pci 599c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 12 18:28:51.480404 kernel: pci 599c:00:02.0: enabling Extended Tags Apr 12 18:28:51.498384 kernel: pci 599c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 599c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 12 18:28:51.511601 kernel: pci_bus 599c:00: busn_res: [bus 00-ff] end is updated to 00 Apr 12 18:28:51.511768 kernel: pci 599c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 12 18:28:51.552356 kernel: mlx5_core 599c:00:02.0: firmware version: 16.30.1284 Apr 12 18:28:51.714350 kernel: mlx5_core 599c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Apr 12 18:28:51.752792 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:28:51.781353 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (492) Apr 12 18:28:51.797890 kernel: hv_netvsc 002248bc-42bb-0022-48bc-42bb002248bc eth0: VF registering: eth1 Apr 12 18:28:51.798092 kernel: mlx5_core 599c:00:02.0 eth1: joined to eth0 Apr 12 18:28:51.796672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:28:51.820388 kernel: mlx5_core 599c:00:02.0 enP22940s1: renamed from eth1 Apr 12 18:28:51.929084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Apr 12 18:28:51.941153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:28:51.948636 systemd[1]: Starting disk-uuid.service... Apr 12 18:28:51.984081 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:28:51.977398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:28:53.003350 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:28:53.003756 disk-uuid[564]: The operation has completed successfully. Apr 12 18:28:53.066233 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:28:53.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.066364 systemd[1]: Finished disk-uuid.service. Apr 12 18:28:53.071273 systemd[1]: Starting verity-setup.service... Apr 12 18:28:53.115873 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 12 18:28:53.351006 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:28:53.356541 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:28:53.366079 systemd[1]: Finished verity-setup.service. Apr 12 18:28:53.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.420355 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:28:53.420496 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:28:53.424511 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Apr 12 18:28:53.425362 systemd[1]: Starting ignition-setup.service... Apr 12 18:28:53.434164 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:28:53.467221 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:28:53.467284 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:28:53.472863 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:28:53.536452 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:28:53.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.545000 audit: BPF prog-id=9 op=LOAD Apr 12 18:28:53.547014 systemd[1]: Starting systemd-networkd.service... Apr 12 18:28:53.557920 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:28:53.570854 systemd-networkd[809]: lo: Link UP Apr 12 18:28:53.570865 systemd-networkd[809]: lo: Gained carrier Apr 12 18:28:53.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.571767 systemd-networkd[809]: Enumeration completed Apr 12 18:28:53.574155 systemd[1]: Started systemd-networkd.service. Apr 12 18:28:53.575069 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:28:53.583731 systemd[1]: Reached target network.target. Apr 12 18:28:53.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.592514 systemd[1]: Starting iscsiuio.service... 
Apr 12 18:28:53.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.606439 systemd[1]: Started iscsiuio.service. Apr 12 18:28:53.617355 systemd[1]: Finished ignition-setup.service. Apr 12 18:28:53.626426 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:28:53.638290 systemd[1]: Starting iscsid.service... Apr 12 18:28:53.656743 iscsid[816]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:28:53.656743 iscsid[816]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 18:28:53.656743 iscsid[816]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Apr 12 18:28:53.656743 iscsid[816]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:28:53.656743 iscsid[816]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:28:53.656743 iscsid[816]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:28:53.656743 iscsid[816]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:28:53.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.664376 systemd[1]: Started iscsid.service.
Apr 12 18:28:53.669321 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:28:53.772441 kernel: mlx5_core 599c:00:02.0 enP22940s1: Link up Apr 12 18:28:53.717753 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:28:53.722412 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:28:53.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:53.733817 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:28:53.746346 systemd[1]: Reached target remote-fs.target. Apr 12 18:28:53.758852 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:28:53.774942 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:28:53.817521 kernel: hv_netvsc 002248bc-42bb-0022-48bc-42bb002248bc eth0: Data path switched to VF: enP22940s1 Apr 12 18:28:53.817686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:28:53.808930 systemd-networkd[809]: enP22940s1: Link UP Apr 12 18:28:53.809007 systemd-networkd[809]: eth0: Link UP Apr 12 18:28:53.817801 systemd-networkd[809]: eth0: Gained carrier Apr 12 18:28:53.825944 systemd-networkd[809]: enP22940s1: Gained carrier Apr 12 18:28:53.837459 systemd-networkd[809]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 12 18:28:54.874446 systemd-networkd[809]: eth0: Gained IPv6LL Apr 12 18:28:55.809598 ignition[815]: Ignition 2.14.0 Apr 12 18:28:55.813090 ignition[815]: Stage: fetch-offline Apr 12 18:28:55.813199 ignition[815]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:55.813229 ignition[815]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:55.901417 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:55.901557 
ignition[815]: parsed url from cmdline: "" Apr 12 18:28:55.901560 ignition[815]: no config URL provided Apr 12 18:28:55.901566 ignition[815]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:28:55.942508 kernel: kauditd_printk_skb: 18 callbacks suppressed Apr 12 18:28:55.942534 kernel: audit: type=1130 audit(1712946535.915:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:55.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:55.911369 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:28:55.901573 ignition[815]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:28:55.917232 systemd[1]: Starting ignition-fetch.service... Apr 12 18:28:55.901579 ignition[815]: failed to fetch config: resource requires networking Apr 12 18:28:55.901917 ignition[815]: Ignition finished successfully Apr 12 18:28:55.940484 ignition[835]: Ignition 2.14.0 Apr 12 18:28:55.940491 ignition[835]: Stage: fetch Apr 12 18:28:55.940609 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:55.940628 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:55.947247 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:55.950282 ignition[835]: parsed url from cmdline: "" Apr 12 18:28:55.950287 ignition[835]: no config URL provided Apr 12 18:28:55.950296 ignition[835]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:28:55.950310 ignition[835]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:28:55.950360 ignition[835]: GET 
http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 12 18:28:55.970724 ignition[835]: GET result: OK Apr 12 18:28:55.970817 ignition[835]: config has been read from IMDS userdata Apr 12 18:28:55.970884 ignition[835]: parsing config with SHA512: 4070ca93125df6e443dcee5f0de42511df1dbfdda09021a32a69eeb0e819cc3bcc5897b93d8c7f3fb52a76ca14b6bcec978b559e1c7bdb82b84d95928ffda91f Apr 12 18:28:56.033244 unknown[835]: fetched base config from "system" Apr 12 18:28:56.033256 unknown[835]: fetched base config from "system" Apr 12 18:28:56.034025 ignition[835]: fetch: fetch complete Apr 12 18:28:56.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.033261 unknown[835]: fetched user config from "azure" Apr 12 18:28:56.076060 kernel: audit: type=1130 audit(1712946536.046:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.034031 ignition[835]: fetch: fetch passed Apr 12 18:28:56.038587 systemd[1]: Finished ignition-fetch.service. Apr 12 18:28:56.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.034078 ignition[835]: Ignition finished successfully Apr 12 18:28:56.112036 kernel: audit: type=1130 audit(1712946536.092:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.066067 systemd[1]: Starting ignition-kargs.service... 
Apr 12 18:28:56.077213 ignition[841]: Ignition 2.14.0 Apr 12 18:28:56.088087 systemd[1]: Finished ignition-kargs.service. Apr 12 18:28:56.077219 ignition[841]: Stage: kargs Apr 12 18:28:56.077382 ignition[841]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:56.127019 systemd[1]: Starting ignition-disks.service... Apr 12 18:28:56.077402 ignition[841]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:56.080106 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:56.082911 ignition[841]: kargs: kargs passed Apr 12 18:28:56.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.149435 systemd[1]: Finished ignition-disks.service. Apr 12 18:28:56.191874 kernel: audit: type=1130 audit(1712946536.155:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.082985 ignition[841]: Ignition finished successfully Apr 12 18:28:56.173518 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:28:56.141035 ignition[847]: Ignition 2.14.0 Apr 12 18:28:56.179610 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:28:56.141042 ignition[847]: Stage: disks Apr 12 18:28:56.183906 systemd[1]: Reached target local-fs.target. Apr 12 18:28:56.141181 ignition[847]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:56.188414 systemd[1]: Reached target sysinit.target. 
Apr 12 18:28:56.141210 ignition[847]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:56.195801 systemd[1]: Reached target basic.target. Apr 12 18:28:56.146216 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:56.207948 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:28:56.148411 ignition[847]: disks: disks passed Apr 12 18:28:56.148483 ignition[847]: Ignition finished successfully Apr 12 18:28:56.288646 systemd-fsck[856]: ROOT: clean, 612/7326000 files, 481074/7359488 blocks Apr 12 18:28:56.295479 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:28:56.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.323361 systemd[1]: Mounting sysroot.mount... Apr 12 18:28:56.330260 kernel: audit: type=1130 audit(1712946536.301:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:56.346354 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:28:56.346563 systemd[1]: Mounted sysroot.mount. Apr 12 18:28:56.350565 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:28:56.382893 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:28:56.387795 systemd[1]: Starting flatcar-metadata-hostname.service... Apr 12 18:28:56.395236 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:28:56.395273 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:28:56.400986 systemd[1]: Mounted sysroot-usr.mount. 
Apr 12 18:28:56.452714 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:28:56.458627 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:28:56.479374 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (866) Apr 12 18:28:56.490490 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:28:56.490520 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:28:56.494865 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:28:56.498295 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:28:56.507847 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:28:56.527284 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:28:56.550454 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:28:56.559816 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:28:57.091836 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:28:57.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.115819 systemd[1]: Starting ignition-mount.service... Apr 12 18:28:57.126694 kernel: audit: type=1130 audit(1712946537.096:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.127853 systemd[1]: Starting sysroot-boot.service... Apr 12 18:28:57.137585 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 18:28:57.137698 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 18:28:57.156947 systemd[1]: Finished sysroot-boot.service. 
Apr 12 18:28:57.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.181393 kernel: audit: type=1130 audit(1712946537.161:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.262381 ignition[935]: INFO : Ignition 2.14.0 Apr 12 18:28:57.266454 ignition[935]: INFO : Stage: mount Apr 12 18:28:57.271260 ignition[935]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:57.271260 ignition[935]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:57.294672 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:57.294672 ignition[935]: INFO : mount: mount passed Apr 12 18:28:57.294672 ignition[935]: INFO : Ignition finished successfully Apr 12 18:28:57.329940 kernel: audit: type=1130 audit(1712946537.304:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.294959 systemd[1]: Finished ignition-mount.service. 
Apr 12 18:28:57.667358 coreos-metadata[865]: Apr 12 18:28:57.667 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 12 18:28:57.675361 coreos-metadata[865]: Apr 12 18:28:57.671 INFO Fetch successful Apr 12 18:28:57.710546 coreos-metadata[865]: Apr 12 18:28:57.710 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 12 18:28:57.734599 coreos-metadata[865]: Apr 12 18:28:57.734 INFO Fetch successful Apr 12 18:28:57.748289 coreos-metadata[865]: Apr 12 18:28:57.748 INFO wrote hostname ci-3510.3.3-a-63b2983992 to /sysroot/etc/hostname Apr 12 18:28:57.756785 systemd[1]: Finished flatcar-metadata-hostname.service. Apr 12 18:28:57.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.763018 systemd[1]: Starting ignition-files.service... Apr 12 18:28:57.791061 kernel: audit: type=1130 audit(1712946537.761:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:28:57.790451 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:28:57.821611 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (944) Apr 12 18:28:57.834023 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:28:57.834080 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:28:57.838551 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:28:57.843412 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Apr 12 18:28:57.857788 ignition[963]: INFO : Ignition 2.14.0 Apr 12 18:28:57.861921 ignition[963]: INFO : Stage: files Apr 12 18:28:57.861921 ignition[963]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:28:57.861921 ignition[963]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:28:57.884199 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:28:57.884199 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:28:57.884199 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:28:57.884199 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:28:57.943869 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:28:57.951386 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:28:57.951386 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:28:57.951386 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:28:57.951386 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Apr 12 18:28:57.944367 unknown[963]: wrote ssh authorized keys file for user: core Apr 12 18:28:58.245427 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:28:58.406654 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 
db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Apr 12 18:28:58.425597 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:28:58.425597 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:28:58.425597 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 12 18:28:58.737838 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:28:58.960538 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:28:58.971032 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:28:58.971032 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Apr 12 18:28:59.239320 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:28:59.518032 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Apr 12 18:28:59.534004 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:28:59.534004 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:28:59.534004 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1
Apr 12 18:28:59.664645 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Apr 12 18:28:59.973969 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc
Apr 12 18:28:59.991883 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:28:59.991883 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:28:59.991883 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Apr 12 18:29:00.032671 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Apr 12 18:29:00.340746 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Apr 12 18:29:00.358753 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:29:00.358753 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:29:00.358753 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Apr 12 18:29:00.398879 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Apr 12 18:29:01.060507 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Apr 12 18:29:01.079586 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:29:01.079586 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:29:01.079586 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:29:01.079586 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:29:01.079586 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 12 18:29:01.360633 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 12 18:29:01.415100 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Apr 12 18:29:01.425668 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:29:01.597434 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (968)
Apr 12 18:29:01.597459 kernel: audit: type=1130 audit(1712946541.553:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem279518295"
Apr 12 18:29:01.598383 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem279518295": device or resource busy
Apr 12 18:29:01.598383 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem279518295", trying btrfs: device or resource busy
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem279518295"
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem279518295"
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem279518295"
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem279518295"
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023757680"
Apr 12 18:29:01.598383 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023757680": device or resource busy
Apr 12 18:29:01.598383 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4023757680", trying btrfs: device or resource busy
Apr 12 18:29:01.598383 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023757680"
Apr 12 18:29:01.873870 kernel: audit: type=1130 audit(1712946541.660:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.873914 kernel: audit: type=1131 audit(1712946541.696:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.873928 kernel: audit: type=1130 audit(1712946541.741:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.873938 kernel: audit: type=1130 audit(1712946541.848:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.484067 systemd[1]: mnt-oem279518295.mount: Deactivated successfully.
Apr 12 18:29:01.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.895038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023757680"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem4023757680"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem4023757680"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(18): [started] processing unit "waagent.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(18): [finished] processing unit "waagent.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(19): [started] processing unit "nvidia.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1c): [started] processing unit "prepare-critools.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1e): [started] processing unit "prepare-helm.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:29:01.895038 ignition[963]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:29:02.165473 kernel: audit: type=1131 audit(1712946541.876:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.165504 kernel: audit: type=1130 audit(1712946541.976:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.165520 kernel: audit: type=1131 audit(1712946542.115:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.518898 systemd[1]: mnt-oem4023757680.mount: Deactivated successfully.
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:29:02.179063 ignition[963]: INFO : files: files passed
Apr 12 18:29:02.179063 ignition[963]: INFO : Ignition finished successfully
Apr 12 18:29:02.391733 kernel: audit: type=1131 audit(1712946542.276:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.391770 kernel: audit: type=1131 audit(1712946542.317:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.391903 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:29:01.534692 systemd[1]: Finished ignition-files.service.
Apr 12 18:29:02.417728 iscsid[816]: iscsid shutting down.
Apr 12 18:29:02.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.585439 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 18:29:01.603395 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 18:29:02.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.613932 systemd[1]: Starting ignition-quench.service...
Apr 12 18:29:01.632919 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:29:02.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.633043 systemd[1]: Finished ignition-quench.service.
Apr 12 18:29:02.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.720754 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:29:02.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.493358 ignition[1001]: INFO : Ignition 2.14.0
Apr 12 18:29:02.493358 ignition[1001]: INFO : Stage: umount
Apr 12 18:29:02.493358 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 18:29:02.493358 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Apr 12 18:29:02.493358 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 12 18:29:02.493358 ignition[1001]: INFO : umount: umount passed
Apr 12 18:29:02.493358 ignition[1001]: INFO : Ignition finished successfully
Apr 12 18:29:02.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.760324 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:29:01.783514 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:29:01.831708 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:29:01.831850 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:29:01.876943 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:29:01.900757 systemd[1]: Reached target initrd.target.
Apr 12 18:29:02.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:01.915398 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:29:01.926509 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:29:01.972281 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:29:02.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.026568 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:29:02.054926 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:29:02.064775 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:29:02.086555 systemd[1]: Stopped target timers.target.
Apr 12 18:29:02.104377 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:29:02.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.104453 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 18:29:02.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.713000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 18:29:02.139815 systemd[1]: Stopped target initrd.target.
Apr 12 18:29:02.154919 systemd[1]: Stopped target basic.target.
Apr 12 18:29:02.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.169528 systemd[1]: Stopped target ignition-complete.target.
Apr 12 18:29:02.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.174171 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 18:29:02.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.183539 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 18:29:02.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.194695 systemd[1]: Stopped target remote-fs.target.
Apr 12 18:29:02.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.205753 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 18:29:02.217979 systemd[1]: Stopped target sysinit.target.
Apr 12 18:29:02.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.228956 systemd[1]: Stopped target local-fs.target.
Apr 12 18:29:02.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.240534 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 18:29:02.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.252601 systemd[1]: Stopped target swap.target.
Apr 12 18:29:02.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.264527 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 18:29:02.264600 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 18:29:02.864908 kernel: hv_netvsc 002248bc-42bb-0022-48bc-42bb002248bc eth0: Data path switched from VF: enP22940s1
Apr 12 18:29:02.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.277129 systemd[1]: Stopped target cryptsetup.target.
Apr 12 18:29:02.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.306282 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 18:29:02.306362 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 18:29:02.318163 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 18:29:02.318208 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 18:29:02.363017 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 18:29:02.363083 systemd[1]: Stopped ignition-files.service.
Apr 12 18:29:02.377365 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 12 18:29:02.377423 systemd[1]: Stopped flatcar-metadata-hostname.service.
Apr 12 18:29:02.387538 systemd[1]: Stopping ignition-mount.service...
Apr 12 18:29:02.397665 systemd[1]: Stopping iscsid.service...
Apr 12 18:29:02.401255 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 18:29:02.401398 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 18:29:02.426580 systemd[1]: Stopping sysroot-boot.service...
Apr 12 18:29:02.439730 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 18:29:02.439817 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 18:29:02.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:02.444768 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 18:29:02.444824 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 18:29:02.468850 systemd[1]: iscsid.service: Deactivated successfully.
Apr 12 18:29:02.468989 systemd[1]: Stopped iscsid.service.
Apr 12 18:29:02.479412 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 18:29:02.479579 systemd[1]: Finished initrd-cleanup.service.
Apr 12 18:29:02.488325 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 18:29:02.488447 systemd[1]: Stopped ignition-mount.service.
Apr 12 18:29:02.498025 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 18:29:02.498085 systemd[1]: Stopped ignition-disks.service.
Apr 12 18:29:02.510530 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 18:29:02.510596 systemd[1]: Stopped ignition-kargs.service.
Apr 12 18:29:02.522462 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 12 18:29:03.027723 systemd-journald[235]: Received SIGTERM from PID 1 (n/a).
Apr 12 18:29:02.522525 systemd[1]: Stopped ignition-fetch.service.
Apr 12 18:29:02.552011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 18:29:02.552076 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 18:29:02.564625 systemd[1]: Stopped target paths.target.
Apr 12 18:29:02.573742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 18:29:02.577364 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 18:29:02.596404 systemd[1]: Stopped target slices.target.
Apr 12 18:29:02.604824 systemd[1]: Stopped target sockets.target.
Apr 12 18:29:02.615950 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 18:29:02.616010 systemd[1]: Closed iscsid.socket.
Apr 12 18:29:02.624198 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 18:29:02.624250 systemd[1]: Stopped ignition-setup.service.
Apr 12 18:29:02.632812 systemd[1]: Stopping iscsiuio.service...
Apr 12 18:29:02.648529 systemd[1]: iscsiuio.service: Deactivated successfully.
Apr 12 18:29:02.648644 systemd[1]: Stopped iscsiuio.service.
Apr 12 18:29:02.656727 systemd[1]: Stopped target network.target.
Apr 12 18:29:02.665732 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 18:29:02.665772 systemd[1]: Closed iscsiuio.socket.
Apr 12 18:29:02.677463 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:29:02.685155 systemd[1]: Stopping systemd-resolved.service...
Apr 12 18:29:02.693539 systemd-networkd[809]: eth0: DHCPv6 lease lost
Apr 12 18:29:03.027000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 18:29:02.694770 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:29:02.694872 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:29:02.703943 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 18:29:02.704038 systemd[1]: Stopped systemd-resolved.service.
Apr 12 18:29:02.713887 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 18:29:02.713932 systemd[1]: Closed systemd-networkd.socket.
Apr 12 18:29:02.724010 systemd[1]: Stopping network-cleanup.service...
Apr 12 18:29:02.731458 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 18:29:02.731531 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 18:29:02.736204 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:29:02.736253 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:29:02.749673 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 18:29:02.749724 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 18:29:02.755750 systemd[1]: Stopping systemd-udevd.service...
Apr 12 18:29:02.760914 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 12 18:29:02.761008 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 12 18:29:02.761603 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 18:29:02.761691 systemd[1]: Stopped sysroot-boot.service.
Apr 12 18:29:02.769319 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 18:29:02.769515 systemd[1]: Stopped systemd-udevd.service.
Apr 12 18:29:02.779734 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 18:29:02.779781 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 18:29:02.787748 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 18:29:02.787788 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 18:29:02.797015 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 18:29:02.797076 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 18:29:02.807543 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 18:29:02.807603 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 18:29:02.816008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 18:29:02.816057 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 18:29:02.825229 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 18:29:02.825281 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 18:29:02.835046 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 18:29:02.849549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 18:29:02.849630 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 18:29:02.861409 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 18:29:02.861527 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 18:29:02.940633 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 18:29:02.940757 systemd[1]: Stopped network-cleanup.service.
Apr 12 18:29:02.950071 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 18:29:02.959455 systemd[1]: Starting initrd-switch-root.service...
Apr 12 18:29:02.979603 systemd[1]: Switching root.
Apr 12 18:29:03.029362 systemd-journald[235]: Journal stopped
Apr 12 18:29:14.252800 kernel: SELinux: Class mctp_socket not defined in policy.
Apr 12 18:29:14.252822 kernel: SELinux: Class anon_inode not defined in policy.
Apr 12 18:29:14.252840 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:29:14.252852 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:29:14.252860 kernel: SELinux: policy capability open_perms=1
Apr 12 18:29:14.252876 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:29:14.252885 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:29:14.252896 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:29:14.252908 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:29:14.252921 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:29:14.252940 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:29:14.252959 systemd[1]: Successfully loaded SELinux policy in 240.216ms.
Apr 12 18:29:14.252976 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.515ms.
Apr 12 18:29:14.252997 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:29:14.253017 systemd[1]: Detected virtualization microsoft.
Apr 12 18:29:14.253027 systemd[1]: Detected architecture arm64.
Apr 12 18:29:14.253043 systemd[1]: Detected first boot.
Apr 12 18:29:14.253052 systemd[1]: Hostname set to .
Apr 12 18:29:14.253062 systemd[1]: Initializing machine ID from random generator.
Apr 12 18:29:14.253078 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:29:14.253086 kernel: kauditd_printk_skb: 40 callbacks suppressed
Apr 12 18:29:14.253096 kernel: audit: type=1400 audit(1712946546.994:88): avc: denied { associate } for pid=1034 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:29:14.253117 kernel: audit: type=1300 audit(1712946546.994:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222fc a1=40000283d8 a2=4000026840 a3=32 items=0 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:14.253127 kernel: audit: type=1327 audit(1712946546.994:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:29:14.253147 kernel: audit: type=1400 audit(1712946547.008:89): avc: denied { associate } for pid=1034 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:29:14.253156 kernel: audit: type=1300 audit(1712946547.008:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000223d9 a2=1ed a3=0 items=2 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:14.253172 kernel: audit: type=1307 audit(1712946547.008:89): cwd="/"
Apr 12 18:29:14.253185 kernel: audit: type=1302 audit(1712946547.008:89):
item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:14.253203 kernel: audit: type=1302 audit(1712946547.008:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:14.253222 kernel: audit: type=1327 audit(1712946547.008:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:14.253231 systemd[1]: Populated /etc with preset unit settings. Apr 12 18:29:14.253240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:29:14.253259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:29:14.253271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:29:14.253282 kernel: audit: type=1334 audit(1712946553.473:90): prog-id=12 op=LOAD Apr 12 18:29:14.253297 kernel: audit: type=1334 audit(1712946553.473:91): prog-id=3 op=UNLOAD Apr 12 18:29:14.253306 kernel: audit: type=1334 audit(1712946553.479:92): prog-id=13 op=LOAD Apr 12 18:29:14.253317 kernel: audit: type=1334 audit(1712946553.482:93): prog-id=14 op=LOAD Apr 12 18:29:14.253339 kernel: audit: type=1334 audit(1712946553.482:94): prog-id=4 op=UNLOAD Apr 12 18:29:14.253351 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Apr 12 18:29:14.253365 kernel: audit: type=1334 audit(1712946553.482:95): prog-id=5 op=UNLOAD Apr 12 18:29:14.253378 systemd[1]: Stopped initrd-switch-root.service. Apr 12 18:29:14.253393 kernel: audit: type=1334 audit(1712946553.488:96): prog-id=15 op=LOAD Apr 12 18:29:14.253405 kernel: audit: type=1334 audit(1712946553.488:97): prog-id=12 op=UNLOAD Apr 12 18:29:14.253414 kernel: audit: type=1334 audit(1712946553.493:98): prog-id=16 op=LOAD Apr 12 18:29:14.253462 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 18:29:14.253475 kernel: audit: type=1334 audit(1712946553.499:99): prog-id=17 op=LOAD Apr 12 18:29:14.253484 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:29:14.253505 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:29:14.253515 systemd[1]: Created slice system-getty.slice. Apr 12 18:29:14.253535 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:29:14.253544 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:29:14.253559 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:29:14.253570 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:29:14.253580 systemd[1]: Created slice user.slice. Apr 12 18:29:14.253592 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:29:14.253605 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:29:14.253614 systemd[1]: Set up automount boot.automount. Apr 12 18:29:14.253627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:29:14.253637 systemd[1]: Stopped target initrd-switch-root.target. Apr 12 18:29:14.253650 systemd[1]: Stopped target initrd-fs.target. Apr 12 18:29:14.253662 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 18:29:14.253673 systemd[1]: Reached target integritysetup.target. Apr 12 18:29:14.253686 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:29:14.253696 systemd[1]: Reached target remote-fs.target. 
Apr 12 18:29:14.253707 systemd[1]: Reached target slices.target. Apr 12 18:29:14.253721 systemd[1]: Reached target swap.target. Apr 12 18:29:14.253733 systemd[1]: Reached target torcx.target. Apr 12 18:29:14.253744 systemd[1]: Reached target veritysetup.target. Apr 12 18:29:14.253754 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:29:14.253765 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:29:14.253777 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:29:14.253790 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:29:14.253802 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:29:14.253814 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:29:14.253825 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:29:14.253835 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:29:14.253848 systemd[1]: Mounting media.mount... Apr 12 18:29:14.253860 systemd[1]: Mounting sys-kernel-debug.mount... Apr 12 18:29:14.253870 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:29:14.253883 systemd[1]: Mounting tmp.mount... Apr 12 18:29:14.253908 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 18:29:14.253920 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:29:14.253932 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:29:14.253943 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:29:14.253954 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 18:29:14.253964 systemd[1]: Starting modprobe@drm.service... Apr 12 18:29:14.253977 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:29:14.253990 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:29:14.254002 systemd[1]: Starting modprobe@loop.service... Apr 12 18:29:14.254015 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Apr 12 18:29:14.254026 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 18:29:14.254038 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 18:29:14.254049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 18:29:14.254059 kernel: loop: module loaded Apr 12 18:29:14.254071 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 18:29:14.254081 systemd[1]: Stopped systemd-journald.service. Apr 12 18:29:14.254095 kernel: fuse: init (API version 7.34) Apr 12 18:29:14.254106 systemd[1]: systemd-journald.service: Consumed 3.263s CPU time. Apr 12 18:29:14.254119 systemd[1]: Starting systemd-journald.service... Apr 12 18:29:14.254129 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:29:14.254141 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:29:14.254153 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:29:14.254164 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:29:14.254175 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 18:29:14.254186 systemd[1]: Stopped verity-setup.service. Apr 12 18:29:14.254201 systemd[1]: Mounted dev-hugepages.mount. Apr 12 18:29:14.254211 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:29:14.254226 systemd[1]: Mounted media.mount. Apr 12 18:29:14.254238 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:29:14.254249 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:29:14.254260 systemd[1]: Mounted tmp.mount. Apr 12 18:29:14.254272 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:29:14.254288 systemd-journald[1140]: Journal started Apr 12 18:29:14.254350 systemd-journald[1140]: Runtime Journal (/run/log/journal/4d59f01712ff41c194560f24af878ba9) is 8.0M, max 78.6M, 70.6M free. 
Apr 12 18:29:04.948000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 18:29:05.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:29:05.665000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:29:05.665000 audit: BPF prog-id=10 op=LOAD Apr 12 18:29:05.665000 audit: BPF prog-id=10 op=UNLOAD Apr 12 18:29:05.665000 audit: BPF prog-id=11 op=LOAD Apr 12 18:29:05.665000 audit: BPF prog-id=11 op=UNLOAD Apr 12 18:29:06.994000 audit[1034]: AVC avc: denied { associate } for pid=1034 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 18:29:06.994000 audit[1034]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222fc a1=40000283d8 a2=4000026840 a3=32 items=0 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:06.994000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:07.008000 audit[1034]: AVC avc: denied { associate } for pid=1034 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 18:29:07.008000 audit[1034]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000223d9 a2=1ed a3=0 items=2 ppid=1017 pid=1034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:07.008000 audit: CWD cwd="/" Apr 12 18:29:07.008000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:07.008000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:07.008000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:13.473000 audit: BPF prog-id=12 op=LOAD Apr 12 18:29:13.473000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:29:13.479000 audit: BPF prog-id=13 op=LOAD Apr 12 18:29:13.482000 audit: BPF prog-id=14 op=LOAD Apr 12 18:29:13.482000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:29:13.482000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:29:13.488000 audit: BPF prog-id=15 op=LOAD Apr 12 18:29:13.488000 audit: BPF prog-id=12 op=UNLOAD Apr 12 18:29:13.493000 audit: BPF prog-id=16 op=LOAD Apr 12 18:29:13.499000 audit: BPF prog-id=17 op=LOAD Apr 12 18:29:13.499000 audit: BPF prog-id=13 op=UNLOAD Apr 12 18:29:13.499000 audit: BPF prog-id=14 op=UNLOAD Apr 12 18:29:13.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:13.522000 audit: BPF prog-id=15 op=UNLOAD Apr 12 18:29:13.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.102000 audit: BPF prog-id=18 op=LOAD Apr 12 18:29:14.102000 audit: BPF prog-id=19 op=LOAD Apr 12 18:29:14.102000 audit: BPF prog-id=20 op=LOAD Apr 12 18:29:14.102000 audit: BPF prog-id=16 op=UNLOAD Apr 12 18:29:14.102000 audit: BPF prog-id=17 op=UNLOAD Apr 12 18:29:14.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:14.249000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 18:29:14.249000 audit[1140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd21b2820 a2=4000 a3=1 items=0 ppid=1 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:14.249000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 18:29:13.472003 systemd[1]: Queued start job for default target multi-user.target. Apr 12 18:29:06.931933 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:29:13.500472 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 12 18:29:06.945293 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:29:13.500857 systemd[1]: systemd-journald.service: Consumed 3.263s CPU time. 
Apr 12 18:29:06.945313 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:29:06.945376 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Apr 12 18:29:06.945387 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="skipped missing lower profile" missing profile=oem Apr 12 18:29:06.945422 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Apr 12 18:29:06.945439 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Apr 12 18:29:06.945653 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Apr 12 18:29:06.945688 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:29:06.945700 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:29:06.970966 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Apr 12 18:29:06.971023 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Apr 12 18:29:06.971049 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3 Apr 12 18:29:06.971065 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Apr 12 18:29:06.971088 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3 Apr 12 18:29:06.971101 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Apr 12 18:29:12.226774 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:12Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:12.227048 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:12Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:12.227150 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:12Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Apr 12 18:29:12.227324 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:12Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:12.227389 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:12Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Apr 12 18:29:12.227447 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-04-12T18:29:12Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Apr 12 18:29:14.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.269122 systemd[1]: Started systemd-journald.service. Apr 12 18:29:14.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.269833 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:29:14.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.274707 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Apr 12 18:29:14.274849 systemd[1]: Finished modprobe@configfs.service. Apr 12 18:29:14.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.279914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 18:29:14.280145 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 18:29:14.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.284971 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 18:29:14.285134 systemd[1]: Finished modprobe@drm.service. Apr 12 18:29:14.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.289682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 18:29:14.289864 systemd[1]: Finished modprobe@efi_pstore.service. 
Apr 12 18:29:14.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.295362 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 18:29:14.295491 systemd[1]: Finished modprobe@fuse.service. Apr 12 18:29:14.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.300406 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 12 18:29:14.300552 systemd[1]: Finished modprobe@loop.service. Apr 12 18:29:14.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.305227 systemd[1]: Finished systemd-modules-load.service. 
Apr 12 18:29:14.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.310557 systemd[1]: Finished systemd-network-generator.service. Apr 12 18:29:14.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.316199 systemd[1]: Finished systemd-remount-fs.service. Apr 12 18:29:14.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.321635 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:29:14.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.327590 systemd[1]: Reached target network-pre.target. Apr 12 18:29:14.333567 systemd[1]: Mounting sys-fs-fuse-connections.mount... Apr 12 18:29:14.339159 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 18:29:14.346963 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 12 18:29:14.361812 systemd[1]: Starting systemd-hwdb-update.service... Apr 12 18:29:14.367237 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:29:14.371466 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 18:29:14.372741 systemd[1]: Starting systemd-random-seed.service... 
Apr 12 18:29:14.377169 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 18:29:14.378559 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:29:14.386465 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:29:14.392267 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:29:14.403733 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:29:14.408713 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:29:14.417546 udevadm[1154]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 12 18:29:14.426227 systemd[1]: Finished systemd-random-seed.service. Apr 12 18:29:14.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.431211 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:29:14.465737 systemd-journald[1140]: Time spent on flushing to /var/log/journal/4d59f01712ff41c194560f24af878ba9 is 15.234ms for 1136 entries. Apr 12 18:29:14.465737 systemd-journald[1140]: System Journal (/var/log/journal/4d59f01712ff41c194560f24af878ba9) is 8.0M, max 2.6G, 2.6G free. Apr 12 18:29:14.547587 systemd-journald[1140]: Received client request to flush runtime journal. Apr 12 18:29:14.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.489228 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:29:14.548745 systemd[1]: Finished systemd-journal-flush.service. 
Apr 12 18:29:14.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.862468 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:29:14.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:15.501715 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 18:29:15.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:15.508000 audit: BPF prog-id=21 op=LOAD Apr 12 18:29:15.508000 audit: BPF prog-id=22 op=LOAD Apr 12 18:29:15.508000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:29:15.509000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:29:15.510071 systemd[1]: Starting systemd-udevd.service... Apr 12 18:29:15.530276 systemd-udevd[1157]: Using default interface naming scheme 'v252'. Apr 12 18:29:15.776460 systemd[1]: Started systemd-udevd.service. Apr 12 18:29:15.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:15.786000 audit: BPF prog-id=23 op=LOAD Apr 12 18:29:15.789387 systemd[1]: Starting systemd-networkd.service... Apr 12 18:29:15.832015 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Apr 12 18:29:15.875000 audit: BPF prog-id=24 op=LOAD Apr 12 18:29:15.876000 audit: BPF prog-id=25 op=LOAD Apr 12 18:29:15.876000 audit: BPF prog-id=26 op=LOAD Apr 12 18:29:15.877285 systemd[1]: Starting systemd-userdbd.service... 
Apr 12 18:29:15.880000 audit[1168]: AVC avc: denied { confidentiality } for pid=1168 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:29:15.928082 kernel: hv_vmbus: registering driver hv_balloon
Apr 12 18:29:15.928665 kernel: hv_vmbus: registering driver hyperv_fb
Apr 12 18:29:15.928691 kernel: mousedev: PS/2 mouse device common for all mice
Apr 12 18:29:15.928720 kernel: hv_utils: Registering HyperV Utility Driver
Apr 12 18:29:15.928736 kernel: hv_vmbus: registering driver hv_utils
Apr 12 18:29:15.934240 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 12 18:29:15.945094 kernel: hv_utils: Heartbeat IC version 3.0
Apr 12 18:29:15.945165 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 12 18:29:15.945191 kernel: hv_utils: Shutdown IC version 3.2
Apr 12 18:29:15.949396 kernel: hv_utils: TimeSync IC version 4.0
Apr 12 18:29:15.949513 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 12 18:29:15.949562 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 12 18:29:15.692157 kernel: Console: switching to colour dummy device 80x25
Apr 12 18:29:15.734725 systemd-journald[1140]: Time jumped backwards, rotating.
Apr 12 18:29:15.734805 kernel: Console: switching to colour frame buffer device 128x48
Apr 12 18:29:15.880000 audit[1168]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf45472e0 a1=aa2c a2=ffff86e924b0 a3=aaaaf44a4010 items=12 ppid=1157 pid=1168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:15.880000 audit: CWD cwd="/"
Apr 12 18:29:15.880000 audit: PATH item=0 name=(null) inode=6803 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=1 name=(null) inode=11291 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=2 name=(null) inode=11291 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=3 name=(null) inode=11292 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=4 name=(null) inode=11291 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=5 name=(null) inode=11293 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=6 name=(null) inode=11291 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=7 name=(null) inode=11294 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=8 name=(null) inode=11291 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=9 name=(null) inode=11295 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=10 name=(null) inode=11291 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PATH item=11 name=(null) inode=11296 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:29:15.880000 audit: PROCTITLE proctitle="(udev-worker)"
Apr 12 18:29:15.812854 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:29:15.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:15.929621 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1165)
Apr 12 18:29:15.954941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:29:15.964178 systemd[1]: Finished systemd-udev-settle.service.
Apr 12 18:29:15.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:15.971219 systemd[1]: Starting lvm2-activation-early.service...
Apr 12 18:29:16.104074 systemd-networkd[1175]: lo: Link UP
Apr 12 18:29:16.104337 systemd-networkd[1175]: lo: Gained carrier
Apr 12 18:29:16.104881 systemd-networkd[1175]: Enumeration completed
Apr 12 18:29:16.105075 systemd[1]: Started systemd-networkd.service.
Apr 12 18:29:16.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:16.111281 systemd[1]: Starting systemd-networkd-wait-online.service...
Apr 12 18:29:16.129408 systemd-networkd[1175]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:29:16.178602 kernel: mlx5_core 599c:00:02.0 enP22940s1: Link up
Apr 12 18:29:16.200256 lvm[1234]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:29:16.205609 kernel: hv_netvsc 002248bc-42bb-0022-48bc-42bb002248bc eth0: Data path switched to VF: enP22940s1
Apr 12 18:29:16.207235 systemd-networkd[1175]: enP22940s1: Link UP
Apr 12 18:29:16.207600 systemd-networkd[1175]: eth0: Link UP
Apr 12 18:29:16.207611 systemd-networkd[1175]: eth0: Gained carrier
Apr 12 18:29:16.212122 systemd-networkd[1175]: enP22940s1: Gained carrier
Apr 12 18:29:16.218700 systemd-networkd[1175]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 12 18:29:16.238604 systemd[1]: Finished lvm2-activation-early.service.
Apr 12 18:29:16.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:16.244318 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:29:16.250057 systemd[1]: Starting lvm2-activation.service...
Apr 12 18:29:16.254429 lvm[1237]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:29:16.280547 systemd[1]: Finished lvm2-activation.service.
Apr 12 18:29:16.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:16.285056 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:29:16.289432 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 12 18:29:16.289461 systemd[1]: Reached target local-fs.target.
Apr 12 18:29:16.293810 systemd[1]: Reached target machines.target.
Apr 12 18:29:16.299458 systemd[1]: Starting ldconfig.service...
Apr 12 18:29:16.314561 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Apr 12 18:29:16.314659 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:29:16.316025 systemd[1]: Starting systemd-boot-update.service...
Apr 12 18:29:16.321713 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Apr 12 18:29:16.328385 systemd[1]: Starting systemd-machine-id-commit.service...
Apr 12 18:29:16.333104 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:29:16.333166 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:29:16.334492 systemd[1]: Starting systemd-tmpfiles-setup.service...
Apr 12 18:29:16.357174 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1239 (bootctl)
Apr 12 18:29:16.362785 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Apr 12 18:29:17.015300 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Apr 12 18:29:17.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:17.077222 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Apr 12 18:29:17.185166 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 12 18:29:17.186368 systemd[1]: Finished systemd-machine-id-commit.service.
Apr 12 18:29:17.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:17.258472 systemd-fsck[1247]: fsck.fat 4.2 (2021-01-31)
Apr 12 18:29:17.258472 systemd-fsck[1247]: /dev/sda1: 236 files, 117047/258078 clusters
Apr 12 18:29:17.258932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Apr 12 18:29:17.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:17.266876 systemd[1]: Mounting boot.mount...
Apr 12 18:29:17.280007 systemd[1]: Mounted boot.mount.
Apr 12 18:29:17.291685 systemd[1]: Finished systemd-boot-update.service.
Apr 12 18:29:17.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:17.339531 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 12 18:29:17.416264 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 12 18:29:17.843740 systemd-networkd[1175]: eth0: Gained IPv6LL
Apr 12 18:29:17.848639 systemd[1]: Finished systemd-networkd-wait-online.service.
Apr 12 18:29:17.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.050095 systemd[1]: Finished systemd-tmpfiles-setup.service.
Apr 12 18:29:18.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.056591 systemd[1]: Starting audit-rules.service...
Apr 12 18:29:18.062405 systemd[1]: Starting clean-ca-certificates.service...
Apr 12 18:29:18.068198 systemd[1]: Starting systemd-journal-catalog-update.service...
Apr 12 18:29:18.073000 audit: BPF prog-id=27 op=LOAD
Apr 12 18:29:18.075431 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:29:18.081000 audit: BPF prog-id=28 op=LOAD
Apr 12 18:29:18.082954 systemd[1]: Starting systemd-timesyncd.service...
Apr 12 18:29:18.088552 systemd[1]: Starting systemd-update-utmp.service...
Apr 12 18:29:18.125143 systemd[1]: Finished clean-ca-certificates.service.
Apr 12 18:29:18.126000 audit[1259]: SYSTEM_BOOT pid=1259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.132142 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 12 18:29:18.133907 systemd[1]: Finished systemd-update-utmp.service.
Apr 12 18:29:18.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.169932 systemd[1]: Started systemd-timesyncd.service.
Apr 12 18:29:18.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.174621 systemd[1]: Reached target time-set.target.
Apr 12 18:29:18.184983 systemd[1]: Finished systemd-journal-catalog-update.service.
Apr 12 18:29:18.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.236806 systemd-resolved[1256]: Positive Trust Anchors:
Apr 12 18:29:18.236818 systemd-resolved[1256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:29:18.236844 systemd-resolved[1256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:29:18.241353 systemd-resolved[1256]: Using system hostname 'ci-3510.3.3-a-63b2983992'.
Apr 12 18:29:18.242904 systemd[1]: Started systemd-resolved.service.
Apr 12 18:29:18.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.249706 systemd[1]: Reached target network.target.
Apr 12 18:29:18.251579 kernel: kauditd_printk_skb: 86 callbacks suppressed
Apr 12 18:29:18.251624 kernel: audit: type=1130 audit(1712946558.247:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:29:18.272067 systemd[1]: Reached target network-online.target.
Apr 12 18:29:18.277526 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:29:18.444911 systemd-timesyncd[1258]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org).
Apr 12 18:29:18.444971 systemd-timesyncd[1258]: Initial clock synchronization to Fri 2024-04-12 18:29:18.445891 UTC.
Apr 12 18:29:18.572000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:29:18.573756 augenrules[1274]: No rules
Apr 12 18:29:18.572000 audit[1274]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc20664b0 a2=420 a3=0 items=0 ppid=1253 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:18.584492 systemd[1]: Finished audit-rules.service.
Apr 12 18:29:18.607326 kernel: audit: type=1305 audit(1712946558.572:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:29:18.607468 kernel: audit: type=1300 audit(1712946558.572:170): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc20664b0 a2=420 a3=0 items=0 ppid=1253 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:29:18.572000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:29:18.617653 kernel: audit: type=1327 audit(1712946558.572:170): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:29:23.949309 ldconfig[1238]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 12 18:29:23.957270 systemd[1]: Finished ldconfig.service.
Apr 12 18:29:23.963521 systemd[1]: Starting systemd-update-done.service...
Apr 12 18:29:23.987430 systemd[1]: Finished systemd-update-done.service.
Apr 12 18:29:23.992489 systemd[1]: Reached target sysinit.target.
Apr 12 18:29:23.996895 systemd[1]: Started motdgen.path.
Apr 12 18:29:24.000716 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Apr 12 18:29:24.006957 systemd[1]: Started logrotate.timer.
Apr 12 18:29:24.011129 systemd[1]: Started mdadm.timer.
Apr 12 18:29:24.014837 systemd[1]: Started systemd-tmpfiles-clean.timer.
Apr 12 18:29:24.019429 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 12 18:29:24.019464 systemd[1]: Reached target paths.target.
Apr 12 18:29:24.023992 systemd[1]: Reached target timers.target.
Apr 12 18:29:24.028534 systemd[1]: Listening on dbus.socket.
Apr 12 18:29:24.033599 systemd[1]: Starting docker.socket...
Apr 12 18:29:24.051131 systemd[1]: Listening on sshd.socket.
Apr 12 18:29:24.055477 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:29:24.056030 systemd[1]: Listening on docker.socket.
Apr 12 18:29:24.060509 systemd[1]: Reached target sockets.target.
Apr 12 18:29:24.064884 systemd[1]: Reached target basic.target.
Apr 12 18:29:24.069042 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:29:24.069071 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:29:24.070313 systemd[1]: Starting containerd.service...
Apr 12 18:29:24.075188 systemd[1]: Starting dbus.service...
Apr 12 18:29:24.079709 systemd[1]: Starting enable-oem-cloudinit.service...
Apr 12 18:29:24.085159 systemd[1]: Starting extend-filesystems.service...
Apr 12 18:29:24.092653 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Apr 12 18:29:24.093978 systemd[1]: Starting motdgen.service...
Apr 12 18:29:24.098885 systemd[1]: Started nvidia.service.
Apr 12 18:29:24.104237 systemd[1]: Starting prepare-cni-plugins.service...
Apr 12 18:29:24.110176 systemd[1]: Starting prepare-critools.service...
Apr 12 18:29:24.115722 systemd[1]: Starting prepare-helm.service...
Apr 12 18:29:24.120979 systemd[1]: Starting ssh-key-proc-cmdline.service...
Apr 12 18:29:24.126453 systemd[1]: Starting sshd-keygen.service...
Apr 12 18:29:24.137873 jq[1284]: false
Apr 12 18:29:24.132867 systemd[1]: Starting systemd-logind.service...
Apr 12 18:29:24.138638 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:29:24.138711 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 12 18:29:24.139203 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 12 18:29:24.140040 systemd[1]: Starting update-engine.service...
Apr 12 18:29:24.145993 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Apr 12 18:29:24.151124 jq[1302]: true
Apr 12 18:29:24.154040 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 12 18:29:24.155141 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Apr 12 18:29:24.159240 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 12 18:29:24.159888 systemd[1]: Finished ssh-key-proc-cmdline.service.
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda1
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda2
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda3
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found usr
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda4
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda6
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda7
Apr 12 18:29:24.174259 extend-filesystems[1285]: Found sda9
Apr 12 18:29:24.174259 extend-filesystems[1285]: Checking size of /dev/sda9
Apr 12 18:29:24.230810 jq[1309]: true
Apr 12 18:29:24.193958 systemd[1]: motdgen.service: Deactivated successfully.
Apr 12 18:29:24.194159 systemd[1]: Finished motdgen.service.
Apr 12 18:29:24.251335 env[1311]: time="2024-04-12T18:29:24.251267010Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Apr 12 18:29:24.267335 systemd-logind[1298]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Apr 12 18:29:24.269272 systemd-logind[1298]: New seat seat0.
Apr 12 18:29:24.278208 extend-filesystems[1285]: Old size kept for /dev/sda9
Apr 12 18:29:24.289542 extend-filesystems[1285]: Found sr0
Apr 12 18:29:24.283162 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 12 18:29:24.283345 systemd[1]: Finished extend-filesystems.service.
Apr 12 18:29:24.299382 env[1311]: time="2024-04-12T18:29:24.299332701Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 12 18:29:24.299531 env[1311]: time="2024-04-12T18:29:24.299502269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302075 env[1311]: time="2024-04-12T18:29:24.302012833Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302075 env[1311]: time="2024-04-12T18:29:24.302065596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302369 env[1311]: time="2024-04-12T18:29:24.302333649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302369 env[1311]: time="2024-04-12T18:29:24.302361650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302443 env[1311]: time="2024-04-12T18:29:24.302376411Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 12 18:29:24.302443 env[1311]: time="2024-04-12T18:29:24.302386251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302487 env[1311]: time="2024-04-12T18:29:24.302464215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:24.302799 env[1311]: time="2024-04-12T18:29:24.302765390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:29:24.303568 env[1311]: time="2024-04-12T18:29:24.303516387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:29:24.309561 tar[1306]: ./
Apr 12 18:29:24.309561 tar[1306]: ./loopback
Apr 12 18:29:24.313309 tar[1307]: crictl
Apr 12 18:29:24.314399 tar[1308]: linux-arm64/helm
Apr 12 18:29:24.329115 env[1311]: time="2024-04-12T18:29:24.303568790Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 12 18:29:24.330666 env[1311]: time="2024-04-12T18:29:24.330286427Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 12 18:29:24.330666 env[1311]: time="2024-04-12T18:29:24.330624684Z" level=info msg="metadata content store policy set" policy=shared
Apr 12 18:29:24.332133 dbus-daemon[1283]: [system] SELinux support is enabled
Apr 12 18:29:24.344364 bash[1341]: Updated "/home/core/.ssh/authorized_keys"
Apr 12 18:29:24.332316 systemd[1]: Started dbus.service.
Apr 12 18:29:24.337959 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 12 18:29:24.338000 systemd[1]: Reached target system-config.target.
Apr 12 18:29:24.345611 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 12 18:29:24.345636 systemd[1]: Reached target user-config.target.
Apr 12 18:29:24.353787 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Apr 12 18:29:24.363473 systemd[1]: Started systemd-logind.service.
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.367860801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.367926764Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.367942005Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.368064491Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.368093332Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.368110253Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368189 env[1311]: time="2024-04-12T18:29:24.368130414Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368752 env[1311]: time="2024-04-12T18:29:24.368621638Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368752 env[1311]: time="2024-04-12T18:29:24.368657960Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368752 env[1311]: time="2024-04-12T18:29:24.368673641Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368752 env[1311]: time="2024-04-12T18:29:24.368686922Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.368752 env[1311]: time="2024-04-12T18:29:24.368701002Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 12 18:29:24.368909 env[1311]: time="2024-04-12T18:29:24.368897932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 12 18:29:24.369306 env[1311]: time="2024-04-12T18:29:24.368997337Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 12 18:29:24.369306 env[1311]: time="2024-04-12T18:29:24.369297272Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 12 18:29:24.369237 dbus-daemon[1283]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 12 18:29:24.369409 env[1311]: time="2024-04-12T18:29:24.369339234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.369409 env[1311]: time="2024-04-12T18:29:24.369357075Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 12 18:29:24.369459 env[1311]: time="2024-04-12T18:29:24.369418478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.369459 env[1311]: time="2024-04-12T18:29:24.369433158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.369459 env[1311]: time="2024-04-12T18:29:24.369446359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.369514 env[1311]: time="2024-04-12T18:29:24.369458800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.370158 env[1311]: time="2024-04-12T18:29:24.369537444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.370158 env[1311]: time="2024-04-12T18:29:24.369629088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.370158 env[1311]: time="2024-04-12T18:29:24.369644729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.370158 env[1311]: time="2024-04-12T18:29:24.369660050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.370158 env[1311]: time="2024-04-12T18:29:24.369678971Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 12 18:29:24.375638 env[1311]: time="2024-04-12T18:29:24.373831415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.375806 env[1311]: time="2024-04-12T18:29:24.375648385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.375806 env[1311]: time="2024-04-12T18:29:24.375689227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.375806 env[1311]: time="2024-04-12T18:29:24.375703748Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 12 18:29:24.375806 env[1311]: time="2024-04-12T18:29:24.375722469Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 12 18:29:24.375806 env[1311]: time="2024-04-12T18:29:24.375738949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 12 18:29:24.376649 env[1311]: time="2024-04-12T18:29:24.376608672Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Apr 12 18:29:24.376750 env[1311]: time="2024-04-12T18:29:24.376661235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 12 18:29:24.378468 env[1311]: time="2024-04-12T18:29:24.378371879Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 12 18:29:24.378468 env[1311]: time="2024-04-12T18:29:24.378464604Z" level=info msg="Connect containerd service"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.379501255Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390432194Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390625684Z" level=info msg="Start subscribing containerd event"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390683327Z" level=info msg="Start recovering state"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390787252Z" level=info msg="Start event monitor"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390789932Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390808173Z" level=info msg="Start snapshots syncer"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390821613Z" level=info msg="Start cni network conf syncer for default"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390830254Z" level=info msg="Start streaming server"
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390843814Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 12 18:29:24.395998 env[1311]: time="2024-04-12T18:29:24.390906658Z" level=info msg="containerd successfully booted in 0.144949s"
Apr 12 18:29:24.390990 systemd[1]: Started containerd.service.
Apr 12 18:29:24.480718 tar[1306]: ./bandwidth
Apr 12 18:29:24.482657 systemd[1]: nvidia.service: Deactivated successfully.
Apr 12 18:29:24.526109 tar[1306]: ./ptp
Apr 12 18:29:24.568877 tar[1306]: ./vlan
Apr 12 18:29:24.611053 tar[1306]: ./host-device
Apr 12 18:29:24.650841 tar[1306]: ./tuning
Apr 12 18:29:24.686739 tar[1306]: ./vrf
Apr 12 18:29:24.723675 tar[1306]: ./sbr
Apr 12 18:29:24.759448 tar[1306]: ./tap
Apr 12 18:29:24.802144 tar[1306]: ./dhcp
Apr 12 18:29:24.907361 tar[1306]: ./static
Apr 12 18:29:24.911179 update_engine[1300]: I0412 18:29:24.899420 1300 main.cc:92] Flatcar Update Engine starting
Apr 12 18:29:24.937771 tar[1306]: ./firewall
Apr 12 18:29:24.966068 systemd[1]: Started update-engine.service.
Apr 12 18:29:25.003797 update_engine[1300]: I0412 18:29:24.966102 1300 update_check_scheduler.cc:74] Next update check in 6m49s
Apr 12 18:29:25.003874 tar[1306]: ./macvlan
Apr 12 18:29:24.973724 systemd[1]: Started locksmithd.service.
Apr 12 18:29:25.039365 tar[1306]: ./dummy
Apr 12 18:29:25.080072 tar[1306]: ./bridge
Apr 12 18:29:25.128178 tar[1306]: ./ipvlan
Apr 12 18:29:25.167142 tar[1306]: ./portmap
Apr 12 18:29:25.206217 tar[1306]: ./host-local
Apr 12 18:29:25.319999 systemd[1]: Finished prepare-cni-plugins.service.
Apr 12 18:29:25.339478 systemd[1]: Finished prepare-critools.service.
Apr 12 18:29:25.347915 tar[1308]: linux-arm64/LICENSE
Apr 12 18:29:25.347915 tar[1308]: linux-arm64/README.md
Apr 12 18:29:25.352453 systemd[1]: Finished prepare-helm.service.
Apr 12 18:29:26.487402 sshd_keygen[1303]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 12 18:29:26.506611 systemd[1]: Finished sshd-keygen.service.
Apr 12 18:29:26.512854 systemd[1]: Starting issuegen.service...
Apr 12 18:29:26.518051 systemd[1]: Started waagent.service.
Apr 12 18:29:26.523236 systemd[1]: issuegen.service: Deactivated successfully.
Apr 12 18:29:26.523430 systemd[1]: Finished issuegen.service.
Apr 12 18:29:26.529380 systemd[1]: Starting systemd-user-sessions.service...
Apr 12 18:29:26.569507 locksmithd[1388]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 12 18:29:26.572501 systemd[1]: Finished systemd-user-sessions.service.
Apr 12 18:29:26.580613 systemd[1]: Started getty@tty1.service.
Apr 12 18:29:26.586918 systemd[1]: Started serial-getty@ttyAMA0.service.
Apr 12 18:29:26.592101 systemd[1]: Reached target getty.target.
Apr 12 18:29:26.596781 systemd[1]: Reached target multi-user.target.
Apr 12 18:29:26.603088 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Apr 12 18:29:26.612511 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Apr 12 18:29:26.612727 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Apr 12 18:29:26.618404 systemd[1]: Startup finished in 742ms (kernel) + 14.920s (initrd) + 22.326s (userspace) = 37.989s.
Apr 12 18:29:27.178269 login[1410]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 12 18:29:27.179752 login[1411]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Apr 12 18:29:27.251155 systemd[1]: Created slice user-500.slice.
Apr 12 18:29:27.252361 systemd[1]: Starting user-runtime-dir@500.service...
Apr 12 18:29:27.255405 systemd-logind[1298]: New session 1 of user core.
Apr 12 18:29:27.258327 systemd-logind[1298]: New session 2 of user core.
Apr 12 18:29:27.286820 systemd[1]: Finished user-runtime-dir@500.service.
Apr 12 18:29:27.288427 systemd[1]: Starting user@500.service...
Apr 12 18:29:27.318199 (systemd)[1414]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:29:27.551251 systemd[1414]: Queued start job for default target default.target.
Apr 12 18:29:27.552536 systemd[1414]: Reached target paths.target.
Apr 12 18:29:27.552747 systemd[1414]: Reached target sockets.target.
Apr 12 18:29:27.552762 systemd[1414]: Reached target timers.target.
Apr 12 18:29:27.552774 systemd[1414]: Reached target basic.target.
Apr 12 18:29:27.552827 systemd[1414]: Reached target default.target.
Apr 12 18:29:27.552853 systemd[1414]: Startup finished in 227ms.
Apr 12 18:29:27.553136 systemd[1]: Started user@500.service.
Apr 12 18:29:27.554714 systemd[1]: Started session-1.scope.
Apr 12 18:29:27.556048 systemd[1]: Started session-2.scope.
Apr 12 18:29:32.789556 waagent[1407]: 2024-04-12T18:29:32.789426Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Apr 12 18:29:32.797045 waagent[1407]: 2024-04-12T18:29:32.796941Z INFO Daemon Daemon OS: flatcar 3510.3.3
Apr 12 18:29:32.801892 waagent[1407]: 2024-04-12T18:29:32.801800Z INFO Daemon Daemon Python: 3.9.16
Apr 12 18:29:32.808846 waagent[1407]: 2024-04-12T18:29:32.808725Z INFO Daemon Daemon Run daemon
Apr 12 18:29:32.813847 waagent[1407]: 2024-04-12T18:29:32.813751Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.3'
Apr 12 18:29:32.833095 waagent[1407]: 2024-04-12T18:29:32.832933Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Apr 12 18:29:32.849688 waagent[1407]: 2024-04-12T18:29:32.849490Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 12 18:29:32.859636 waagent[1407]: 2024-04-12T18:29:32.859515Z INFO Daemon Daemon cloud-init is enabled: False
Apr 12 18:29:32.865082 waagent[1407]: 2024-04-12T18:29:32.864974Z INFO Daemon Daemon Using waagent for provisioning
Apr 12 18:29:32.871160 waagent[1407]: 2024-04-12T18:29:32.871067Z INFO Daemon Daemon Activate resource disk
Apr 12 18:29:32.875979 waagent[1407]: 2024-04-12T18:29:32.875883Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Apr 12 18:29:32.890410 waagent[1407]: 2024-04-12T18:29:32.890304Z INFO Daemon Daemon Found device: None
Apr 12 18:29:32.895238 waagent[1407]: 2024-04-12T18:29:32.895138Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Apr 12 18:29:32.903990 waagent[1407]: 2024-04-12T18:29:32.903887Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Apr 12 18:29:32.916440 waagent[1407]: 2024-04-12T18:29:32.916346Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 12 18:29:32.922838 waagent[1407]: 2024-04-12T18:29:32.922735Z INFO Daemon Daemon Running default provisioning handler
Apr 12 18:29:32.937012 waagent[1407]: 2024-04-12T18:29:32.936840Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Apr 12 18:29:32.952672 waagent[1407]: 2024-04-12T18:29:32.952487Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Apr 12 18:29:32.962477 waagent[1407]: 2024-04-12T18:29:32.962383Z INFO Daemon Daemon cloud-init is enabled: False
Apr 12 18:29:32.967611 waagent[1407]: 2024-04-12T18:29:32.967497Z INFO Daemon Daemon Copying ovf-env.xml
Apr 12 18:29:33.043009 waagent[1407]: 2024-04-12T18:29:33.042787Z INFO Daemon Daemon Successfully mounted dvd
Apr 12 18:29:33.168455 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Apr 12 18:29:33.215323 waagent[1407]: 2024-04-12T18:29:33.215170Z INFO Daemon Daemon Detect protocol endpoint
Apr 12 18:29:33.220312 waagent[1407]: 2024-04-12T18:29:33.220218Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Apr 12 18:29:33.226318 waagent[1407]: 2024-04-12T18:29:33.226229Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Apr 12 18:29:33.232886 waagent[1407]: 2024-04-12T18:29:33.232798Z INFO Daemon Daemon Test for route to 168.63.129.16
Apr 12 18:29:33.238424 waagent[1407]: 2024-04-12T18:29:33.238344Z INFO Daemon Daemon Route to 168.63.129.16 exists
Apr 12 18:29:33.243866 waagent[1407]: 2024-04-12T18:29:33.243782Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Apr 12 18:29:33.345809 waagent[1407]: 2024-04-12T18:29:33.345674Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Apr 12 18:29:33.353329 waagent[1407]: 2024-04-12T18:29:33.353273Z INFO Daemon Daemon Wire protocol version:2012-11-30
Apr 12 18:29:33.359092 waagent[1407]: 2024-04-12T18:29:33.358994Z INFO Daemon Daemon Server preferred version:2015-04-05
Apr 12 18:29:33.961404 waagent[1407]: 2024-04-12T18:29:33.961235Z INFO Daemon Daemon Initializing goal state during protocol detection
Apr 12 18:29:33.977171 waagent[1407]: 2024-04-12T18:29:33.977056Z INFO Daemon Daemon Forcing an update of the goal state..
Apr 12 18:29:33.983532 waagent[1407]: 2024-04-12T18:29:33.983426Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Apr 12 18:29:34.064541 waagent[1407]: 2024-04-12T18:29:34.064377Z INFO Daemon Daemon Found private key matching thumbprint 61940AF45AAC499280C07AA9535BB27C0F535DDE
Apr 12 18:29:34.073405 waagent[1407]: 2024-04-12T18:29:34.073297Z INFO Daemon Daemon Certificate with thumbprint 12D254CED1C52542DF9F5318FA349DD83C6C2EF1 has no matching private key.
Apr 12 18:29:34.083726 waagent[1407]: 2024-04-12T18:29:34.083623Z INFO Daemon Daemon Fetch goal state completed
Apr 12 18:29:34.144929 waagent[1407]: 2024-04-12T18:29:34.144859Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 40f1417f-dd61-4355-9527-5dfe8b0c611d New eTag: 13100350889210445476]
Apr 12 18:29:34.155938 waagent[1407]: 2024-04-12T18:29:34.155841Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Apr 12 18:29:34.172060 waagent[1407]: 2024-04-12T18:29:34.171988Z INFO Daemon Daemon Starting provisioning
Apr 12 18:29:34.177442 waagent[1407]: 2024-04-12T18:29:34.177334Z INFO Daemon Daemon Handle ovf-env.xml.
Apr 12 18:29:34.182448 waagent[1407]: 2024-04-12T18:29:34.182347Z INFO Daemon Daemon Set hostname [ci-3510.3.3-a-63b2983992]
Apr 12 18:29:34.238247 waagent[1407]: 2024-04-12T18:29:34.238048Z INFO Daemon Daemon Publish hostname [ci-3510.3.3-a-63b2983992]
Apr 12 18:29:34.245186 waagent[1407]: 2024-04-12T18:29:34.245082Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Apr 12 18:29:34.251978 waagent[1407]: 2024-04-12T18:29:34.251885Z INFO Daemon Daemon Primary interface is [eth0]
Apr 12 18:29:34.270373 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Apr 12 18:29:34.270544 systemd[1]: Stopped systemd-networkd-wait-online.service.
Apr 12 18:29:34.270627 systemd[1]: Stopping systemd-networkd-wait-online.service...
Apr 12 18:29:34.270881 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:29:34.276647 systemd-networkd[1175]: eth0: DHCPv6 lease lost
Apr 12 18:29:34.278089 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:29:34.278277 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:29:34.280423 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:29:34.309039 systemd-networkd[1458]: enP22940s1: Link UP
Apr 12 18:29:34.309055 systemd-networkd[1458]: enP22940s1: Gained carrier
Apr 12 18:29:34.310078 systemd-networkd[1458]: eth0: Link UP
Apr 12 18:29:34.310091 systemd-networkd[1458]: eth0: Gained carrier
Apr 12 18:29:34.310421 systemd-networkd[1458]: lo: Link UP
Apr 12 18:29:34.310433 systemd-networkd[1458]: lo: Gained carrier
Apr 12 18:29:34.310726 systemd-networkd[1458]: eth0: Gained IPv6LL
Apr 12 18:29:34.312239 systemd-networkd[1458]: Enumeration completed
Apr 12 18:29:34.312384 systemd[1]: Started systemd-networkd.service.
Apr 12 18:29:34.314182 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:29:34.314311 systemd[1]: Starting systemd-networkd-wait-online.service...
Apr 12 18:29:34.318464 waagent[1407]: 2024-04-12T18:29:34.318294Z INFO Daemon Daemon Create user account if not exists
Apr 12 18:29:34.324705 waagent[1407]: 2024-04-12T18:29:34.324545Z INFO Daemon Daemon User core already exists, skip useradd
Apr 12 18:29:34.330742 waagent[1407]: 2024-04-12T18:29:34.330636Z INFO Daemon Daemon Configure sudoer
Apr 12 18:29:34.335923 waagent[1407]: 2024-04-12T18:29:34.335829Z INFO Daemon Daemon Configure sshd
Apr 12 18:29:34.340362 waagent[1407]: 2024-04-12T18:29:34.340260Z INFO Daemon Daemon Deploy ssh public key.
Apr 12 18:29:34.340666 systemd-networkd[1458]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Apr 12 18:29:34.354194 systemd[1]: Finished systemd-networkd-wait-online.service.
Apr 12 18:29:35.569995 waagent[1407]: 2024-04-12T18:29:35.569891Z INFO Daemon Daemon Provisioning complete
Apr 12 18:29:35.593930 waagent[1407]: 2024-04-12T18:29:35.593809Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Apr 12 18:29:35.600323 waagent[1407]: 2024-04-12T18:29:35.600222Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Apr 12 18:29:35.611092 waagent[1407]: 2024-04-12T18:29:35.610986Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Apr 12 18:29:35.945332 waagent[1467]: 2024-04-12T18:29:35.945154Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Apr 12 18:29:35.946667 waagent[1467]: 2024-04-12T18:29:35.946553Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 12 18:29:35.946982 waagent[1467]: 2024-04-12T18:29:35.946926Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 12 18:29:35.960659 waagent[1467]: 2024-04-12T18:29:35.960530Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Apr 12 18:29:35.961058 waagent[1467]: 2024-04-12T18:29:35.960999Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Apr 12 18:29:36.033648 waagent[1467]: 2024-04-12T18:29:36.033453Z INFO ExtHandler ExtHandler Found private key matching thumbprint 61940AF45AAC499280C07AA9535BB27C0F535DDE
Apr 12 18:29:36.034080 waagent[1467]: 2024-04-12T18:29:36.034014Z INFO ExtHandler ExtHandler Certificate with thumbprint 12D254CED1C52542DF9F5318FA349DD83C6C2EF1 has no matching private key.
Apr 12 18:29:36.034543 waagent[1467]: 2024-04-12T18:29:36.034481Z INFO ExtHandler ExtHandler Fetch goal state completed
Apr 12 18:29:36.050426 waagent[1467]: 2024-04-12T18:29:36.050360Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 8a8cd32c-83a6-44fc-a509-c07f5617971c New eTag: 13100350889210445476]
Apr 12 18:29:36.051278 waagent[1467]: 2024-04-12T18:29:36.051204Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Apr 12 18:29:36.147111 waagent[1467]: 2024-04-12T18:29:36.146951Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.3; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 12 18:29:36.172528 waagent[1467]: 2024-04-12T18:29:36.172427Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1467
Apr 12 18:29:36.176918 waagent[1467]: 2024-04-12T18:29:36.176811Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.3', '', 'Flatcar Container Linux by Kinvolk']
Apr 12 18:29:36.178544 waagent[1467]: 2024-04-12T18:29:36.178453Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 12 18:29:36.280523 waagent[1467]: 2024-04-12T18:29:36.280402Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 12 18:29:36.281242 waagent[1467]: 2024-04-12T18:29:36.281157Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 12 18:29:36.290216 waagent[1467]: 2024-04-12T18:29:36.290154Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 12 18:29:36.291047 waagent[1467]: 2024-04-12T18:29:36.290972Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Apr 12 18:29:36.292481 waagent[1467]: 2024-04-12T18:29:36.292402Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Apr 12 18:29:36.294273 waagent[1467]: 2024-04-12T18:29:36.294188Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 12 18:29:36.294556 waagent[1467]: 2024-04-12T18:29:36.294478Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 12 18:29:36.295207 waagent[1467]: 2024-04-12T18:29:36.295140Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 12 18:29:36.295896 waagent[1467]: 2024-04-12T18:29:36.295821Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 12 18:29:36.296412 waagent[1467]: 2024-04-12T18:29:36.296328Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Apr 12 18:29:36.297325 waagent[1467]: 2024-04-12T18:29:36.297145Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Apr 12 18:29:36.297501 waagent[1467]: 2024-04-12T18:29:36.297415Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Apr 12 18:29:36.297702 waagent[1467]: 2024-04-12T18:29:36.297633Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Apr 12 18:29:36.297702 waagent[1467]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Apr 12 18:29:36.297702 waagent[1467]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Apr 12 18:29:36.297702 waagent[1467]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Apr 12 18:29:36.297702 waagent[1467]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Apr 12 18:29:36.297702 waagent[1467]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 12 18:29:36.297702 waagent[1467]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Apr 12 18:29:36.300158 waagent[1467]: 2024-04-12T18:29:36.299950Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 12 18:29:36.301628 waagent[1467]: 2024-04-12T18:29:36.301489Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Apr 12 18:29:36.302110 waagent[1467]: 2024-04-12T18:29:36.302022Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 12 18:29:36.302347 waagent[1467]: 2024-04-12T18:29:36.302258Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Apr 12 18:29:36.302486 waagent[1467]: 2024-04-12T18:29:36.302417Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Apr 12 18:29:36.303719 waagent[1467]: 2024-04-12T18:29:36.303562Z INFO EnvHandler ExtHandler Configure routes
Apr 12 18:29:36.307130 waagent[1467]: 2024-04-12T18:29:36.306992Z INFO EnvHandler ExtHandler Gateway:None
Apr 12 18:29:36.309378 waagent[1467]: 2024-04-12T18:29:36.309278Z INFO EnvHandler ExtHandler Routes:None
Apr 12 18:29:36.320186 waagent[1467]: 2024-04-12T18:29:36.320101Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Apr 12 18:29:36.321140 waagent[1467]: 2024-04-12T18:29:36.321076Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Apr 12 18:29:36.322386 waagent[1467]: 2024-04-12T18:29:36.322311Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Apr 12 18:29:36.343283 waagent[1467]: 2024-04-12T18:29:36.343181Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1458'
Apr 12 18:29:36.355656 waagent[1467]: 2024-04-12T18:29:36.355543Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Apr 12 18:29:36.418109 waagent[1467]: 2024-04-12T18:29:36.417943Z INFO MonitorHandler ExtHandler Network interfaces:
Apr 12 18:29:36.418109 waagent[1467]: Executing ['ip', '-a', '-o', 'link']:
Apr 12 18:29:36.418109 waagent[1467]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Apr 12 18:29:36.418109 waagent[1467]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:42:bb brd ff:ff:ff:ff:ff:ff
Apr 12 18:29:36.418109 waagent[1467]: 3: enP22940s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:42:bb brd ff:ff:ff:ff:ff:ff\ altname enP22940p0s2
Apr 12 18:29:36.418109 waagent[1467]: Executing ['ip', '-4', '-a', '-o', 'address']:
Apr 12 18:29:36.418109 waagent[1467]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Apr 12 18:29:36.418109 waagent[1467]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Apr 12 18:29:36.418109 waagent[1467]: Executing ['ip', '-6', '-a', '-o', 'address']:
Apr 12 18:29:36.418109 waagent[1467]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Apr 12 18:29:36.418109 waagent[1467]: 2: eth0 inet6 fe80::222:48ff:febc:42bb/64 scope link \ valid_lft forever preferred_lft forever
Apr 12 18:29:36.548692 waagent[1467]: 2024-04-12T18:29:36.548613Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.10.0.8 -- exiting
Apr 12 18:29:36.614894 waagent[1407]: 2024-04-12T18:29:36.614753Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Apr 12 18:29:36.619185 waagent[1407]: 2024-04-12T18:29:36.619124Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.10.0.8 to be the latest agent
Apr 12 18:29:37.896450 waagent[1496]: 2024-04-12T18:29:37.896344Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.10.0.8)
Apr 12 18:29:37.897695 waagent[1496]: 2024-04-12T18:29:37.897615Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.3
Apr 12 18:29:37.897964 waagent[1496]: 2024-04-12T18:29:37.897914Z INFO ExtHandler ExtHandler Python: 3.9.16
Apr 12 18:29:37.898187 waagent[1496]: 2024-04-12T18:29:37.898141Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Apr 12 18:29:37.907428 waagent[1496]: 2024-04-12T18:29:37.907287Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.3; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Apr 12 18:29:37.908120 waagent[1496]: 2024-04-12T18:29:37.908050Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 12 18:29:37.908398 waagent[1496]: 2024-04-12T18:29:37.908347Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 12 18:29:37.925237 waagent[1496]: 2024-04-12T18:29:37.925121Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Apr 12 18:29:37.935051 waagent[1496]: 2024-04-12T18:29:37.934982Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.149
Apr 12 18:29:37.936417 waagent[1496]: 2024-04-12T18:29:37.936351Z INFO ExtHandler
Apr 12 18:29:37.936762 waagent[1496]: 2024-04-12T18:29:37.936704Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 325f6b6c-c999-4770-a4bc-47ddf4503a4e eTag: 13100350889210445476 source: Fabric]
Apr 12 18:29:37.937779 waagent[1496]: 2024-04-12T18:29:37.937706Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Apr 12 18:29:37.939278 waagent[1496]: 2024-04-12T18:29:37.939213Z INFO ExtHandler
Apr 12 18:29:37.939529 waagent[1496]: 2024-04-12T18:29:37.939478Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Apr 12 18:29:37.946821 waagent[1496]: 2024-04-12T18:29:37.946757Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Apr 12 18:29:37.947566 waagent[1496]: 2024-04-12T18:29:37.947509Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Apr 12 18:29:37.967476 waagent[1496]: 2024-04-12T18:29:37.967397Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Apr 12 18:29:38.044435 waagent[1496]: 2024-04-12T18:29:38.044270Z INFO ExtHandler Downloaded certificate {'thumbprint': '61940AF45AAC499280C07AA9535BB27C0F535DDE', 'hasPrivateKey': True}
Apr 12 18:29:38.046044 waagent[1496]: 2024-04-12T18:29:38.045964Z INFO ExtHandler Downloaded certificate {'thumbprint': '12D254CED1C52542DF9F5318FA349DD83C6C2EF1', 'hasPrivateKey': False}
Apr 12 18:29:38.047505 waagent[1496]: 2024-04-12T18:29:38.047427Z INFO ExtHandler Fetch goal state completed
Apr 12 18:29:38.073710 waagent[1496]: 2024-04-12T18:29:38.073467Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022)
Apr 12 18:29:38.090301 waagent[1496]: 2024-04-12T18:29:38.090177Z INFO ExtHandler ExtHandler WALinuxAgent-2.10.0.8 running as process 1496
Apr 12 18:29:38.094738 waagent[1496]: 2024-04-12T18:29:38.094640Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.3', '', 'Flatcar Container Linux by Kinvolk']
Apr 12 18:29:38.096602 waagent[1496]: 2024-04-12T18:29:38.096498Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Apr 12 18:29:38.103064 waagent[1496]: 2024-04-12T18:29:38.103001Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Apr 12 18:29:38.103756 waagent[1496]: 2024-04-12T18:29:38.103687Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Apr 12 18:29:38.113149 waagent[1496]: 2024-04-12T18:29:38.113081Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Apr 12 18:29:38.113981 waagent[1496]: 2024-04-12T18:29:38.113901Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Apr 12 18:29:38.121994 waagent[1496]: 2024-04-12T18:29:38.121857Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Apr 12 18:29:38.123405 waagent[1496]: 2024-04-12T18:29:38.123309Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Apr 12 18:29:38.125449 waagent[1496]: 2024-04-12T18:29:38.125362Z INFO ExtHandler ExtHandler Starting env monitor service.
Apr 12 18:29:38.125764 waagent[1496]: 2024-04-12T18:29:38.125684Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Apr 12 18:29:38.126291 waagent[1496]: 2024-04-12T18:29:38.126221Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Apr 12 18:29:38.126967 waagent[1496]: 2024-04-12T18:29:38.126899Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Apr 12 18:29:38.127307 waagent[1496]: 2024-04-12T18:29:38.127246Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 12 18:29:38.127307 waagent[1496]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 12 18:29:38.127307 waagent[1496]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 12 18:29:38.127307 waagent[1496]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 12 18:29:38.127307 waagent[1496]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:38.127307 waagent[1496]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:38.127307 waagent[1496]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:29:38.129987 waagent[1496]: 2024-04-12T18:29:38.129841Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 12 18:29:38.130748 waagent[1496]: 2024-04-12T18:29:38.130662Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:29:38.131356 waagent[1496]: 2024-04-12T18:29:38.131288Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:29:38.132646 waagent[1496]: 2024-04-12T18:29:38.131838Z INFO EnvHandler ExtHandler Configure routes Apr 12 18:29:38.134451 waagent[1496]: 2024-04-12T18:29:38.134280Z INFO EnvHandler ExtHandler Gateway:None Apr 12 18:29:38.134875 waagent[1496]: 2024-04-12T18:29:38.134789Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 12 18:29:38.135129 waagent[1496]: 2024-04-12T18:29:38.135058Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 12 18:29:38.136292 waagent[1496]: 2024-04-12T18:29:38.136226Z INFO EnvHandler ExtHandler Routes:None Apr 12 18:29:38.138543 waagent[1496]: 2024-04-12T18:29:38.137127Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 12 18:29:38.140150 waagent[1496]: 2024-04-12T18:29:38.139970Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Apr 12 18:29:38.140524 waagent[1496]: 2024-04-12T18:29:38.140449Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 12 18:29:38.148469 waagent[1496]: 2024-04-12T18:29:38.148304Z INFO MonitorHandler ExtHandler Network interfaces: Apr 12 18:29:38.148469 waagent[1496]: Executing ['ip', '-a', '-o', 'link']: Apr 12 18:29:38.148469 waagent[1496]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 12 18:29:38.148469 waagent[1496]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:42:bb brd ff:ff:ff:ff:ff:ff Apr 12 18:29:38.148469 waagent[1496]: 3: enP22940s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:42:bb brd ff:ff:ff:ff:ff:ff\ altname enP22940p0s2 Apr 12 18:29:38.148469 waagent[1496]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 12 18:29:38.148469 waagent[1496]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 12 18:29:38.148469 waagent[1496]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 12 18:29:38.148469 waagent[1496]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 12 18:29:38.148469 waagent[1496]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Apr 12 18:29:38.148469 waagent[1496]: 2: eth0 inet6 fe80::222:48ff:febc:42bb/64 scope link \ valid_lft forever preferred_lft forever Apr 12 18:29:38.160091 waagent[1496]: 2024-04-12T18:29:38.159981Z INFO ExtHandler ExtHandler Downloading agent manifest Apr 12 18:29:38.177720 waagent[1496]: 2024-04-12T18:29:38.177616Z INFO ExtHandler ExtHandler Apr 12 18:29:38.179123 waagent[1496]: 2024-04-12T18:29:38.179041Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState
started [incarnation_1 channel: WireServer source: Fabric activity: 11c9cca6-1999-4cf0-b875-f2686751af08 correlation 6cab7097-bb86-4d72-bee6-886d20fa87bb created: 2024-04-12T18:28:06.737930Z] Apr 12 18:29:38.181853 waagent[1496]: 2024-04-12T18:29:38.181753Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 12 18:29:38.187197 waagent[1496]: 2024-04-12T18:29:38.187102Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Apr 12 18:29:38.215706 waagent[1496]: 2024-04-12T18:29:38.215604Z INFO ExtHandler ExtHandler Looking for existing remote access users. Apr 12 18:29:38.235421 waagent[1496]: 2024-04-12T18:29:38.235335Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.10.0.8 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D34615AA-AA7C-418F-948A-80E1A61FAE8E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Apr 12 18:29:38.359015 waagent[1496]: 2024-04-12T18:29:38.358847Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Apr 12 18:29:38.359015 waagent[1496]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:38.359015 waagent[1496]: pkts bytes target prot opt in out source destination Apr 12 18:29:38.359015 waagent[1496]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:38.359015 waagent[1496]: pkts bytes target prot opt in out source destination Apr 12 18:29:38.359015 waagent[1496]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:38.359015 waagent[1496]: pkts bytes target prot opt in out source destination Apr 12 18:29:38.359015 waagent[1496]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 12 18:29:38.359015 waagent[1496]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 12 18:29:38.359015 waagent[1496]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 12 18:29:38.369012 waagent[1496]: 2024-04-12T18:29:38.368850Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 12 18:29:38.369012 waagent[1496]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:38.369012 waagent[1496]: pkts bytes target prot opt in out source destination Apr 12 18:29:38.369012 waagent[1496]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:38.369012 waagent[1496]: pkts bytes target prot opt in out source destination Apr 12 18:29:38.369012 waagent[1496]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:29:38.369012 waagent[1496]: pkts bytes target prot opt in out source destination Apr 12 18:29:38.369012 waagent[1496]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 12 18:29:38.369012 waagent[1496]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 12 18:29:38.369012 waagent[1496]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 12 18:29:38.370031 waagent[1496]: 2024-04-12T18:29:38.369974Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 12 18:30:03.838561 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Apr 12 18:30:10.659118 update_engine[1300]: I0412 18:30:10.659044 1300 update_attempter.cc:509] Updating boot flags... Apr 12 18:30:16.175781 systemd[1]: Created slice system-sshd.slice. Apr 12 18:30:16.177017 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.12.6:42270.service. Apr 12 18:30:16.785401 sshd[1617]: Accepted publickey for core from 10.200.12.6 port 42270 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:16.801125 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:16.805480 systemd-logind[1298]: New session 3 of user core. Apr 12 18:30:16.806385 systemd[1]: Started session-3.scope. Apr 12 18:30:17.180871 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.12.6:42280.service. Apr 12 18:30:17.589987 sshd[1622]: Accepted publickey for core from 10.200.12.6 port 42280 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:17.591356 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:17.595539 systemd-logind[1298]: New session 4 of user core. Apr 12 18:30:17.596050 systemd[1]: Started session-4.scope. Apr 12 18:30:17.897889 sshd[1622]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:17.901172 systemd[1]: sshd@1-10.200.20.17:22-10.200.12.6:42280.service: Deactivated successfully. Apr 12 18:30:17.901947 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:30:17.902523 systemd-logind[1298]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:30:17.903633 systemd-logind[1298]: Removed session 4. Apr 12 18:30:17.967389 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.12.6:42288.service. 
Apr 12 18:30:18.378310 sshd[1628]: Accepted publickey for core from 10.200.12.6 port 42288 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:18.380087 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:18.384823 systemd[1]: Started session-5.scope. Apr 12 18:30:18.385657 systemd-logind[1298]: New session 5 of user core. Apr 12 18:30:18.706390 sshd[1628]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:18.709458 systemd[1]: sshd@2-10.200.20.17:22-10.200.12.6:42288.service: Deactivated successfully. Apr 12 18:30:18.710195 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:30:18.710746 systemd-logind[1298]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:30:18.711453 systemd-logind[1298]: Removed session 5. Apr 12 18:30:18.776878 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.12.6:42298.service. Apr 12 18:30:19.192996 sshd[1634]: Accepted publickey for core from 10.200.12.6 port 42298 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:19.194328 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:19.198601 systemd-logind[1298]: New session 6 of user core. Apr 12 18:30:19.199080 systemd[1]: Started session-6.scope. Apr 12 18:30:19.529261 sshd[1634]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:19.532407 systemd[1]: sshd@3-10.200.20.17:22-10.200.12.6:42298.service: Deactivated successfully. Apr 12 18:30:19.533141 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:30:19.533727 systemd-logind[1298]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:30:19.534506 systemd-logind[1298]: Removed session 6. Apr 12 18:30:19.600890 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.12.6:42312.service. 
Apr 12 18:30:20.017332 sshd[1640]: Accepted publickey for core from 10.200.12.6 port 42312 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:20.018699 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:20.023669 systemd[1]: Started session-7.scope. Apr 12 18:30:20.024656 systemd-logind[1298]: New session 7 of user core. Apr 12 18:30:20.533296 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:30:20.533963 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:30:21.214434 systemd[1]: Starting docker.service... Apr 12 18:30:21.261452 env[1658]: time="2024-04-12T18:30:21.261390207Z" level=info msg="Starting up" Apr 12 18:30:21.262933 env[1658]: time="2024-04-12T18:30:21.262900569Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:30:21.263056 env[1658]: time="2024-04-12T18:30:21.263041809Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:30:21.263128 env[1658]: time="2024-04-12T18:30:21.263111330Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:30:21.263182 env[1658]: time="2024-04-12T18:30:21.263169370Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:30:21.265419 env[1658]: time="2024-04-12T18:30:21.265391452Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:30:21.265544 env[1658]: time="2024-04-12T18:30:21.265529333Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:30:21.265661 env[1658]: time="2024-04-12T18:30:21.265639933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:30:21.265727 env[1658]: time="2024-04-12T18:30:21.265713893Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:30:21.270686 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport358266237-merged.mount: Deactivated successfully. Apr 12 18:30:21.378767 env[1658]: time="2024-04-12T18:30:21.378280353Z" level=info msg="Loading containers: start." Apr 12 18:30:21.537597 kernel: Initializing XFRM netlink socket Apr 12 18:30:21.569754 env[1658]: time="2024-04-12T18:30:21.569707391Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:30:21.676470 systemd-networkd[1458]: docker0: Link UP Apr 12 18:30:21.692992 env[1658]: time="2024-04-12T18:30:21.692942345Z" level=info msg="Loading containers: done." Apr 12 18:30:21.702256 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck196837290-merged.mount: Deactivated successfully. Apr 12 18:30:21.714875 env[1658]: time="2024-04-12T18:30:21.714824492Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:30:21.715069 env[1658]: time="2024-04-12T18:30:21.715041213Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:30:21.715190 env[1658]: time="2024-04-12T18:30:21.715165653Z" level=info msg="Daemon has completed initialization" Apr 12 18:30:21.739823 systemd[1]: Started docker.service. Apr 12 18:30:21.747288 env[1658]: time="2024-04-12T18:30:21.747209293Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:30:21.764402 systemd[1]: Reloading. 
Apr 12 18:30:21.818432 /usr/lib/systemd/system-generators/torcx-generator[1788]: time="2024-04-12T18:30:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:30:21.818466 /usr/lib/systemd/system-generators/torcx-generator[1788]: time="2024-04-12T18:30:21Z" level=info msg="torcx already run" Apr 12 18:30:21.910166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:30:21.910188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:30:21.925653 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:30:22.007398 systemd[1]: Started kubelet.service. Apr 12 18:30:22.076237 kubelet[1847]: E0412 18:30:22.075911 1847 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:30:22.080155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:30:22.080277 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:30:25.832504 env[1311]: time="2024-04-12T18:30:25.832456803Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\"" Apr 12 18:30:26.652567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463117343.mount: Deactivated successfully. Apr 12 18:30:28.508523 env[1311]: time="2024-04-12T18:30:28.508473508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:28.516139 env[1311]: time="2024-04-12T18:30:28.516098524Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d4d4d261fc80c6c87ea30cb7d2b1a53b684be80fb7af5e16a2c97371e669f19f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:28.519412 env[1311]: time="2024-04-12T18:30:28.519376850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:28.523484 env[1311]: time="2024-04-12T18:30:28.523430659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cf0c29f585316888225cf254949988bdbedc7ba6238bc9a24bf6f0c508c42b6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:28.524370 env[1311]: time="2024-04-12T18:30:28.524335580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\" returns image reference \"sha256:d4d4d261fc80c6c87ea30cb7d2b1a53b684be80fb7af5e16a2c97371e669f19f\"" Apr 12 18:30:28.533481 env[1311]: time="2024-04-12T18:30:28.533433639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\"" Apr 12 18:30:30.298408 env[1311]: time="2024-04-12T18:30:30.298353367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:30:30.303671 env[1311]: time="2024-04-12T18:30:30.303629337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a4a7509f59f7f027d7c434948b3b8e5463b835d28675c76c6d1ff21d2c4e8f18,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:30.308126 env[1311]: time="2024-04-12T18:30:30.308083466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:30.311991 env[1311]: time="2024-04-12T18:30:30.311823673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6caa3a4278e87169371d031861e49db21742bcbd8df650d7fe519a1a7f6764af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:30.312741 env[1311]: time="2024-04-12T18:30:30.312710875Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\" returns image reference \"sha256:a4a7509f59f7f027d7c434948b3b8e5463b835d28675c76c6d1ff21d2c4e8f18\"" Apr 12 18:30:30.322257 env[1311]: time="2024-04-12T18:30:30.322221293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\"" Apr 12 18:30:31.413285 env[1311]: time="2024-04-12T18:30:31.413237884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.418522 env[1311]: time="2024-04-12T18:30:31.418476734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5de6108d9220f19bcc35bf81a2879e5ff2f6506c08af260c116b803579db675b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.421664 env[1311]: time="2024-04-12T18:30:31.421629739Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.426240 env[1311]: time="2024-04-12T18:30:31.426200108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b8bb7b17a4f915419575ceb885e128d0bb5ea8e67cb88dbde257988b770a4dce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:31.427218 env[1311]: time="2024-04-12T18:30:31.427184310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\" returns image reference \"sha256:5de6108d9220f19bcc35bf81a2879e5ff2f6506c08af260c116b803579db675b\"" Apr 12 18:30:31.436705 env[1311]: time="2024-04-12T18:30:31.436671007Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\"" Apr 12 18:30:32.202840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:30:32.202974 systemd[1]: Stopped kubelet.service. Apr 12 18:30:32.204475 systemd[1]: Started kubelet.service. Apr 12 18:30:32.266125 kubelet[1880]: E0412 18:30:32.266053 1880 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:30:32.268784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:30:32.268914 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:30:32.543016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937900480.mount: Deactivated successfully. 
Apr 12 18:30:33.418912 env[1311]: time="2024-04-12T18:30:33.418858334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.426556 env[1311]: time="2024-04-12T18:30:33.426479228Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7daec180765068529c26cc4c7c989513bebbe614cbbc58beebe1db17ae177e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.430609 env[1311]: time="2024-04-12T18:30:33.430524195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.433558 env[1311]: time="2024-04-12T18:30:33.433499840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b0539f35b586abc54ca7660f9bb8a539d010b9e07d20e9e3d529cf0ca35d4ddf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:33.434311 env[1311]: time="2024-04-12T18:30:33.434249441Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\" returns image reference \"sha256:7daec180765068529c26cc4c7c989513bebbe614cbbc58beebe1db17ae177e06\"" Apr 12 18:30:33.446756 env[1311]: time="2024-04-12T18:30:33.446708983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:30:34.029157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116647139.mount: Deactivated successfully. 
Apr 12 18:30:34.055136 env[1311]: time="2024-04-12T18:30:34.055067696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.061991 env[1311]: time="2024-04-12T18:30:34.061937427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.066278 env[1311]: time="2024-04-12T18:30:34.066236275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.070746 env[1311]: time="2024-04-12T18:30:34.070702962Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:34.071321 env[1311]: time="2024-04-12T18:30:34.071291164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 12 18:30:34.081531 env[1311]: time="2024-04-12T18:30:34.081486461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Apr 12 18:30:35.008188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115257466.mount: Deactivated successfully. 
Apr 12 18:30:37.973805 env[1311]: time="2024-04-12T18:30:37.973740050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.981088 env[1311]: time="2024-04-12T18:30:37.981042062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.985754 env[1311]: time="2024-04-12T18:30:37.985700989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.991080 env[1311]: time="2024-04-12T18:30:37.991036157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:37.991772 env[1311]: time="2024-04-12T18:30:37.991742159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\"" Apr 12 18:30:38.002008 env[1311]: time="2024-04-12T18:30:38.001969775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 18:30:38.644603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232931089.mount: Deactivated successfully. 
Apr 12 18:30:39.138994 env[1311]: time="2024-04-12T18:30:39.138933006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:39.146024 env[1311]: time="2024-04-12T18:30:39.145971777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:39.151036 env[1311]: time="2024-04-12T18:30:39.150991784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:39.154907 env[1311]: time="2024-04-12T18:30:39.154866230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:39.155385 env[1311]: time="2024-04-12T18:30:39.155347751Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Apr 12 18:30:42.285907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 12 18:30:42.286087 systemd[1]: Stopped kubelet.service. Apr 12 18:30:42.287601 systemd[1]: Started kubelet.service. 
Apr 12 18:30:42.345538 kubelet[1964]: E0412 18:30:42.345483 1964 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:30:42.347267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:30:42.347415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:30:44.412144 systemd[1]: Stopped kubelet.service. Apr 12 18:30:44.436783 systemd[1]: Reloading. Apr 12 18:30:44.518295 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2024-04-12T18:30:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:30:44.518861 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2024-04-12T18:30:44Z" level=info msg="torcx already run" Apr 12 18:30:44.620523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:30:44.620548 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:30:44.636475 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:30:44.730841 systemd[1]: Started kubelet.service. 
Apr 12 18:30:44.792176 kubelet[2054]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:30:44.792176 kubelet[2054]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:30:44.792176 kubelet[2054]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:30:44.792556 kubelet[2054]: I0412 18:30:44.792283 2054 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:30:45.697695 kubelet[2054]: I0412 18:30:45.697660 2054 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:30:45.697861 kubelet[2054]: I0412 18:30:45.697850 2054 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:30:45.698165 kubelet[2054]: I0412 18:30:45.698136 2054 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:30:45.702493 kubelet[2054]: I0412 18:30:45.702442 2054 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:30:45.702709 kubelet[2054]: E0412 18:30:45.702687 2054 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.704119 kubelet[2054]: W0412 18:30:45.704099 2054 
machine.go:65] Cannot read vendor id correctly, set empty. Apr 12 18:30:45.704806 kubelet[2054]: I0412 18:30:45.704788 2054 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:30:45.705153 kubelet[2054]: I0412 18:30:45.705136 2054 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:30:45.705304 kubelet[2054]: I0412 18:30:45.705283 2054 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:30:45.705445 kubelet[2054]: I0412 18:30:45.705432 2054 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:30:45.705506 kubelet[2054]: I0412 18:30:45.705497 
2054 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:30:45.705737 kubelet[2054]: I0412 18:30:45.705718 2054 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:30:45.711926 kubelet[2054]: I0412 18:30:45.711894 2054 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:30:45.711926 kubelet[2054]: I0412 18:30:45.711925 2054 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:30:45.712092 kubelet[2054]: I0412 18:30:45.711951 2054 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:30:45.712092 kubelet[2054]: I0412 18:30:45.711965 2054 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:30:45.712745 kubelet[2054]: W0412 18:30:45.712684 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-63b2983992&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.712832 kubelet[2054]: E0412 18:30:45.712757 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-63b2983992&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.712888 kubelet[2054]: I0412 18:30:45.712865 2054 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:30:45.713204 kubelet[2054]: W0412 18:30:45.713176 2054 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 12 18:30:45.713646 kubelet[2054]: I0412 18:30:45.713626 2054 server.go:1168] "Started kubelet" Apr 12 18:30:45.717498 kubelet[2054]: W0412 18:30:45.717446 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.717691 kubelet[2054]: E0412 18:30:45.717677 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.718028 kubelet[2054]: E0412 18:30:45.717913 2054 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.3-a-63b2983992.17c59be610f4c574", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.3-a-63b2983992", UID:"ci-3510.3.3-a-63b2983992", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.3-a-63b2983992"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 30, 45, 713601908, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 30, 45, 713601908, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.17:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:30:45.718290 kubelet[2054]: I0412 18:30:45.718275 2054 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:30:45.718725 kubelet[2054]: I0412 18:30:45.718708 2054 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:30:45.719467 kubelet[2054]: I0412 18:30:45.719447 2054 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:30:45.722226 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:30:45.722937 kubelet[2054]: E0412 18:30:45.722917 2054 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:30:45.723055 kubelet[2054]: I0412 18:30:45.723020 2054 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:30:45.723123 kubelet[2054]: E0412 18:30:45.723110 2054 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:30:45.725289 kubelet[2054]: I0412 18:30:45.725268 2054 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:30:45.725553 kubelet[2054]: I0412 18:30:45.725535 2054 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:30:45.726080 kubelet[2054]: W0412 18:30:45.726036 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.726209 kubelet[2054]: E0412 18:30:45.726188 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.727208 kubelet[2054]: E0412 18:30:45.727184 2054 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-63b2983992?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Apr 12 18:30:45.785081 kubelet[2054]: I0412 18:30:45.785051 2054 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:30:45.786375 kubelet[2054]: I0412 18:30:45.786343 2054 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Apr 12 18:30:45.786592 kubelet[2054]: I0412 18:30:45.786562 2054 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:30:45.786694 kubelet[2054]: I0412 18:30:45.786679 2054 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:30:45.786836 kubelet[2054]: E0412 18:30:45.786816 2054 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:30:45.787494 kubelet[2054]: W0412 18:30:45.787435 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.787723 kubelet[2054]: E0412 18:30:45.787708 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:45.866604 kubelet[2054]: I0412 18:30:45.866544 2054 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:45.867531 kubelet[2054]: E0412 18:30:45.867498 2054 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:45.867745 kubelet[2054]: I0412 18:30:45.867727 2054 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:30:45.867800 kubelet[2054]: I0412 18:30:45.867764 2054 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:30:45.867800 kubelet[2054]: I0412 18:30:45.867790 2054 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:30:45.872133 kubelet[2054]: 
I0412 18:30:45.872102 2054 policy_none.go:49] "None policy: Start" Apr 12 18:30:45.873062 kubelet[2054]: I0412 18:30:45.873035 2054 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:30:45.873150 kubelet[2054]: I0412 18:30:45.873074 2054 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:30:45.881016 systemd[1]: Created slice kubepods.slice. Apr 12 18:30:45.885980 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:30:45.886962 kubelet[2054]: E0412 18:30:45.886931 2054 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:30:45.890235 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 18:30:45.897448 kubelet[2054]: I0412 18:30:45.897418 2054 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:30:45.897899 kubelet[2054]: I0412 18:30:45.897880 2054 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:30:45.901474 kubelet[2054]: E0412 18:30:45.901439 2054 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.3-a-63b2983992\" not found" Apr 12 18:30:45.928302 kubelet[2054]: E0412 18:30:45.928269 2054 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-63b2983992?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Apr 12 18:30:46.069488 kubelet[2054]: I0412 18:30:46.069463 2054 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.069987 kubelet[2054]: E0412 18:30:46.069966 2054 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" 
node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.088176 kubelet[2054]: I0412 18:30:46.088146 2054 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:46.089725 kubelet[2054]: I0412 18:30:46.089694 2054 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:46.092735 kubelet[2054]: I0412 18:30:46.092707 2054 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:46.095815 systemd[1]: Created slice kubepods-burstable-pod071c2731125a0f6248128f48edc3a4f8.slice. Apr 12 18:30:46.103182 systemd[1]: Created slice kubepods-burstable-pod32cf39eb5344122991b7bba21140e9d4.slice. Apr 12 18:30:46.115909 systemd[1]: Created slice kubepods-burstable-pod7f4b4c63f50919a6d8c105f0faae2ca9.slice. Apr 12 18:30:46.126935 kubelet[2054]: I0412 18:30:46.126893 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/071c2731125a0f6248128f48edc3a4f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-63b2983992\" (UID: \"071c2731125a0f6248128f48edc3a4f8\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.126935 kubelet[2054]: I0412 18:30:46.126936 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127093 kubelet[2054]: I0412 18:30:46.126957 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127093 kubelet[2054]: I0412 18:30:46.126981 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127093 kubelet[2054]: I0412 18:30:46.127003 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127093 kubelet[2054]: I0412 18:30:46.127027 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f4b4c63f50919a6d8c105f0faae2ca9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-63b2983992\" (UID: \"7f4b4c63f50919a6d8c105f0faae2ca9\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127093 kubelet[2054]: I0412 18:30:46.127047 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/071c2731125a0f6248128f48edc3a4f8-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-63b2983992\" (UID: \"071c2731125a0f6248128f48edc3a4f8\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127214 kubelet[2054]: I0412 18:30:46.127068 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/071c2731125a0f6248128f48edc3a4f8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-63b2983992\" (UID: \"071c2731125a0f6248128f48edc3a4f8\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.127214 kubelet[2054]: I0412 18:30:46.127087 2054 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.329248 kubelet[2054]: E0412 18:30:46.329142 2054 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-63b2983992?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Apr 12 18:30:46.402159 env[1311]: time="2024-04-12T18:30:46.402100864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-63b2983992,Uid:071c2731125a0f6248128f48edc3a4f8,Namespace:kube-system,Attempt:0,}" Apr 12 18:30:46.406231 env[1311]: time="2024-04-12T18:30:46.406185429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-63b2983992,Uid:32cf39eb5344122991b7bba21140e9d4,Namespace:kube-system,Attempt:0,}" Apr 12 18:30:46.420049 env[1311]: time="2024-04-12T18:30:46.419994286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-63b2983992,Uid:7f4b4c63f50919a6d8c105f0faae2ca9,Namespace:kube-system,Attempt:0,}" Apr 12 18:30:46.472469 kubelet[2054]: I0412 18:30:46.472410 2054 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.472894 kubelet[2054]: E0412 18:30:46.472876 2054 kubelet_node_status.go:92] "Unable to register 
node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:46.686463 kubelet[2054]: W0412 18:30:46.686272 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-63b2983992&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:46.686463 kubelet[2054]: E0412 18:30:46.686365 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-63b2983992&limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.038528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374265806.mount: Deactivated successfully. 
Apr 12 18:30:47.066328 env[1311]: time="2024-04-12T18:30:47.066279457Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.069398 env[1311]: time="2024-04-12T18:30:47.069353700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.071527 kubelet[2054]: W0412 18:30:47.071421 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.071527 kubelet[2054]: E0412 18:30:47.071501 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.076403 env[1311]: time="2024-04-12T18:30:47.076358069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.082500 env[1311]: time="2024-04-12T18:30:47.082447036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.092091 env[1311]: time="2024-04-12T18:30:47.092036248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.094675 env[1311]: time="2024-04-12T18:30:47.094619331Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.096797 env[1311]: time="2024-04-12T18:30:47.096764094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.099946 env[1311]: time="2024-04-12T18:30:47.099900778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.102786 env[1311]: time="2024-04-12T18:30:47.102740501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.105004 env[1311]: time="2024-04-12T18:30:47.104955504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.110396 env[1311]: time="2024-04-12T18:30:47.110345711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.112874 env[1311]: time="2024-04-12T18:30:47.112827594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:30:47.129999 kubelet[2054]: E0412 18:30:47.129954 2054 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-63b2983992?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Apr 12 18:30:47.129999 kubelet[2054]: W0412 18:30:47.129942 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.129999 kubelet[2054]: E0412 18:30:47.130005 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.175531 env[1311]: time="2024-04-12T18:30:47.174642549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:30:47.175531 env[1311]: time="2024-04-12T18:30:47.174685709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:30:47.175531 env[1311]: time="2024-04-12T18:30:47.174695709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:30:47.175531 env[1311]: time="2024-04-12T18:30:47.175003390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f92e50a2b6eb8c75f0e1d99d96b75c592cf24e4c926f263b0e0573d97edd44e pid=2093 runtime=io.containerd.runc.v2 Apr 12 18:30:47.178760 kubelet[2054]: W0412 18:30:47.178671 2054 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.178760 kubelet[2054]: E0412 18:30:47.178736 2054 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.17:6443: connect: connection refused Apr 12 18:30:47.181455 env[1311]: time="2024-04-12T18:30:47.180796957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:30:47.181455 env[1311]: time="2024-04-12T18:30:47.180847197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:30:47.181455 env[1311]: time="2024-04-12T18:30:47.180858477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:30:47.181605 env[1311]: time="2024-04-12T18:30:47.181097117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3 pid=2109 runtime=io.containerd.runc.v2 Apr 12 18:30:47.197738 systemd[1]: Started cri-containerd-4f92e50a2b6eb8c75f0e1d99d96b75c592cf24e4c926f263b0e0573d97edd44e.scope. 
Apr 12 18:30:47.209886 systemd[1]: Started cri-containerd-0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3.scope. Apr 12 18:30:47.228165 env[1311]: time="2024-04-12T18:30:47.223028809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:30:47.228165 env[1311]: time="2024-04-12T18:30:47.223099729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:30:47.228165 env[1311]: time="2024-04-12T18:30:47.223111969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:30:47.228165 env[1311]: time="2024-04-12T18:30:47.223520329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789 pid=2145 runtime=io.containerd.runc.v2 Apr 12 18:30:47.245526 systemd[1]: Started cri-containerd-626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789.scope. 
Apr 12 18:30:47.251266 env[1311]: time="2024-04-12T18:30:47.251219883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-63b2983992,Uid:071c2731125a0f6248128f48edc3a4f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f92e50a2b6eb8c75f0e1d99d96b75c592cf24e4c926f263b0e0573d97edd44e\"" Apr 12 18:30:47.260930 env[1311]: time="2024-04-12T18:30:47.260874175Z" level=info msg="CreateContainer within sandbox \"4f92e50a2b6eb8c75f0e1d99d96b75c592cf24e4c926f263b0e0573d97edd44e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:30:47.278122 kubelet[2054]: I0412 18:30:47.278086 2054 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:47.278481 kubelet[2054]: E0412 18:30:47.278458 2054 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:47.286224 env[1311]: time="2024-04-12T18:30:47.286163246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-63b2983992,Uid:7f4b4c63f50919a6d8c105f0faae2ca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3\"" Apr 12 18:30:47.291477 env[1311]: time="2024-04-12T18:30:47.291332172Z" level=info msg="CreateContainer within sandbox \"0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:30:47.301687 env[1311]: time="2024-04-12T18:30:47.301623465Z" level=info msg="CreateContainer within sandbox \"4f92e50a2b6eb8c75f0e1d99d96b75c592cf24e4c926f263b0e0573d97edd44e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8adeab5671eb2b04a4310b5265a8efbbdf182b3557474dff8063bfde99f1c7d\"" Apr 12 18:30:47.303046 env[1311]: 
time="2024-04-12T18:30:47.303009107Z" level=info msg="StartContainer for \"c8adeab5671eb2b04a4310b5265a8efbbdf182b3557474dff8063bfde99f1c7d\"" Apr 12 18:30:47.314155 env[1311]: time="2024-04-12T18:30:47.314085200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-63b2983992,Uid:32cf39eb5344122991b7bba21140e9d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789\"" Apr 12 18:30:47.319102 env[1311]: time="2024-04-12T18:30:47.319044606Z" level=info msg="CreateContainer within sandbox \"626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:30:47.327560 systemd[1]: Started cri-containerd-c8adeab5671eb2b04a4310b5265a8efbbdf182b3557474dff8063bfde99f1c7d.scope. Apr 12 18:30:47.348067 env[1311]: time="2024-04-12T18:30:47.348008882Z" level=info msg="CreateContainer within sandbox \"0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7\"" Apr 12 18:30:47.348652 env[1311]: time="2024-04-12T18:30:47.348617923Z" level=info msg="StartContainer for \"ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7\"" Apr 12 18:30:47.376422 systemd[1]: Started cri-containerd-ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7.scope. 
Apr 12 18:30:47.387601 env[1311]: time="2024-04-12T18:30:47.385107607Z" level=info msg="CreateContainer within sandbox \"626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940\"" Apr 12 18:30:47.387601 env[1311]: time="2024-04-12T18:30:47.385693048Z" level=info msg="StartContainer for \"9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940\"" Apr 12 18:30:47.395429 env[1311]: time="2024-04-12T18:30:47.395366580Z" level=info msg="StartContainer for \"c8adeab5671eb2b04a4310b5265a8efbbdf182b3557474dff8063bfde99f1c7d\" returns successfully" Apr 12 18:30:47.421397 systemd[1]: Started cri-containerd-9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940.scope. Apr 12 18:30:47.453334 env[1311]: time="2024-04-12T18:30:47.453235411Z" level=info msg="StartContainer for \"ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7\" returns successfully" Apr 12 18:30:47.480628 env[1311]: time="2024-04-12T18:30:47.480548924Z" level=info msg="StartContainer for \"9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940\" returns successfully" Apr 12 18:30:48.880497 kubelet[2054]: I0412 18:30:48.880454 2054 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:50.000109 kubelet[2054]: I0412 18:30:50.000070 2054 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:50.715584 kubelet[2054]: I0412 18:30:50.715542 2054 apiserver.go:52] "Watching apiserver" Apr 12 18:30:50.725831 kubelet[2054]: I0412 18:30:50.725782 2054 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:30:50.754128 kubelet[2054]: I0412 18:30:50.754082 2054 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:30:52.832458 systemd[1]: Reloading. 
Apr 12 18:30:52.925751 /usr/lib/systemd/system-generators/torcx-generator[2345]: time="2024-04-12T18:30:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:30:52.935675 /usr/lib/systemd/system-generators/torcx-generator[2345]: time="2024-04-12T18:30:52Z" level=info msg="torcx already run" Apr 12 18:30:52.993379 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:30:52.993402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:30:53.011065 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:30:53.126739 systemd[1]: Stopping kubelet.service... Apr 12 18:30:53.127250 kubelet[2054]: I0412 18:30:53.127189 2054 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:30:53.148194 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:30:53.148404 systemd[1]: Stopped kubelet.service. Apr 12 18:30:53.148458 systemd[1]: kubelet.service: Consumed 1.265s CPU time. Apr 12 18:30:53.150454 systemd[1]: Started kubelet.service. Apr 12 18:30:53.239675 kubelet[2403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 18:30:53.240035 kubelet[2403]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:30:53.240088 kubelet[2403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:30:53.240226 kubelet[2403]: I0412 18:30:53.240188 2403 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:30:53.247381 kubelet[2403]: I0412 18:30:53.247343 2403 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:30:53.247381 kubelet[2403]: I0412 18:30:53.247374 2403 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:30:53.247647 kubelet[2403]: I0412 18:30:53.247626 2403 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:30:53.249240 kubelet[2403]: I0412 18:30:53.249207 2403 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:30:53.251670 kubelet[2403]: I0412 18:30:53.251569 2403 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:30:53.252489 kubelet[2403]: W0412 18:30:53.252469 2403 machine.go:65] Cannot read vendor id correctly, set empty. Apr 12 18:30:53.253416 kubelet[2403]: I0412 18:30:53.253189 2403 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:30:53.253416 kubelet[2403]: I0412 18:30:53.253374 2403 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:30:53.253544 kubelet[2403]: I0412 18:30:53.253439 2403 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:30:53.253544 kubelet[2403]: I0412 18:30:53.253460 2403 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:30:53.253544 kubelet[2403]: I0412 18:30:53.253471 2403 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:30:53.253544 kubelet[2403]: I0412 18:30:53.253498 2403 state_mem.go:36] "Initialized new in-memory state store" Apr 12 
18:30:53.256964 kubelet[2403]: I0412 18:30:53.256937 2403 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:30:53.256964 kubelet[2403]: I0412 18:30:53.256963 2403 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:30:53.257115 kubelet[2403]: I0412 18:30:53.256997 2403 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:30:53.257115 kubelet[2403]: I0412 18:30:53.257021 2403 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:30:53.265922 kubelet[2403]: I0412 18:30:53.265897 2403 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:30:53.266609 kubelet[2403]: I0412 18:30:53.266557 2403 server.go:1168] "Started kubelet" Apr 12 18:30:53.272209 sudo[2415]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:30:53.272431 sudo[2415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:30:53.274874 kubelet[2403]: I0412 18:30:53.274846 2403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:30:53.285042 kubelet[2403]: I0412 18:30:53.285014 2403 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:30:53.286060 kubelet[2403]: I0412 18:30:53.286034 2403 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:30:53.287292 kubelet[2403]: I0412 18:30:53.287266 2403 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:30:53.290986 kubelet[2403]: I0412 18:30:53.290953 2403 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:30:53.294867 kubelet[2403]: I0412 18:30:53.294837 2403 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:30:53.300817 kubelet[2403]: I0412 18:30:53.300783 2403 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Apr 12 18:30:53.302029 kubelet[2403]: I0412 18:30:53.302006 2403 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:30:53.302172 kubelet[2403]: I0412 18:30:53.302161 2403 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:30:53.302242 kubelet[2403]: I0412 18:30:53.302233 2403 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:30:53.302337 kubelet[2403]: E0412 18:30:53.302327 2403 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:30:53.307517 kubelet[2403]: E0412 18:30:53.307484 2403 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:30:53.307517 kubelet[2403]: E0412 18:30:53.307517 2403 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:30:53.404688 kubelet[2403]: E0412 18:30:53.403105 2403 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:30:53.405308 kubelet[2403]: I0412 18:30:53.405281 2403 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.446462 kubelet[2403]: I0412 18:30:53.446429 2403 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.446760 kubelet[2403]: I0412 18:30:53.446748 2403 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.483623 kubelet[2403]: I0412 18:30:53.482874 2403 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:30:53.483623 kubelet[2403]: I0412 18:30:53.482903 2403 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:30:53.483623 kubelet[2403]: I0412 18:30:53.482924 2403 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:30:53.483623 kubelet[2403]: I0412 18:30:53.483070 2403 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:30:53.483623 kubelet[2403]: I0412 18:30:53.483082 2403 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Apr 12 18:30:53.483623 kubelet[2403]: I0412 18:30:53.483089 2403 policy_none.go:49] "None policy: Start" Apr 12 18:30:53.483917 kubelet[2403]: I0412 18:30:53.483695 2403 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:30:53.483917 kubelet[2403]: I0412 18:30:53.483720 2403 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:30:53.483917 kubelet[2403]: I0412 18:30:53.483875 2403 state_mem.go:75] "Updated machine memory state" Apr 12 18:30:53.492598 kubelet[2403]: I0412 18:30:53.492229 2403 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" Apr 12 18:30:53.494067 kubelet[2403]: I0412 18:30:53.493649 2403 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:30:53.603986 kubelet[2403]: I0412 18:30:53.603763 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:53.603986 kubelet[2403]: I0412 18:30:53.603862 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:53.603986 kubelet[2403]: I0412 18:30:53.603904 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:30:53.616350 kubelet[2403]: W0412 18:30:53.615434 2403 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:53.618268 kubelet[2403]: W0412 18:30:53.618234 2403 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:53.618650 kubelet[2403]: W0412 18:30:53.618624 2403 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:53.699640 kubelet[2403]: I0412 18:30:53.699512 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699640 kubelet[2403]: I0412 18:30:53.699570 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/071c2731125a0f6248128f48edc3a4f8-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-63b2983992\" (UID: \"071c2731125a0f6248128f48edc3a4f8\") " 
pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699640 kubelet[2403]: I0412 18:30:53.699606 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699640 kubelet[2403]: I0412 18:30:53.699635 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699839 kubelet[2403]: I0412 18:30:53.699658 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699839 kubelet[2403]: I0412 18:30:53.699677 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f4b4c63f50919a6d8c105f0faae2ca9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-63b2983992\" (UID: \"7f4b4c63f50919a6d8c105f0faae2ca9\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699839 kubelet[2403]: I0412 18:30:53.699700 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/071c2731125a0f6248128f48edc3a4f8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-63b2983992\" (UID: \"071c2731125a0f6248128f48edc3a4f8\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699839 kubelet[2403]: I0412 18:30:53.699719 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/071c2731125a0f6248128f48edc3a4f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-63b2983992\" (UID: \"071c2731125a0f6248128f48edc3a4f8\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.699839 kubelet[2403]: I0412 18:30:53.699739 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32cf39eb5344122991b7bba21140e9d4-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-63b2983992\" (UID: \"32cf39eb5344122991b7bba21140e9d4\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" Apr 12 18:30:53.894981 sudo[2415]: pam_unix(sudo:session): session closed for user root Apr 12 18:30:54.257888 kubelet[2403]: I0412 18:30:54.257852 2403 apiserver.go:52] "Watching apiserver" Apr 12 18:30:54.295898 kubelet[2403]: I0412 18:30:54.295860 2403 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:30:54.304360 kubelet[2403]: I0412 18:30:54.304329 2403 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:30:54.444088 kubelet[2403]: W0412 18:30:54.444050 2403 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:30:54.446773 kubelet[2403]: E0412 18:30:54.446734 2403 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.3-a-63b2983992\" already exists" 
pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" Apr 12 18:30:54.466352 kubelet[2403]: I0412 18:30:54.466315 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.3-a-63b2983992" podStartSLOduration=1.4662710350000001 podCreationTimestamp="2024-04-12 18:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:54.457812866 +0000 UTC m=+1.303653969" watchObservedRunningTime="2024-04-12 18:30:54.466271035 +0000 UTC m=+1.312112138" Apr 12 18:30:54.474734 kubelet[2403]: I0412 18:30:54.474694 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.3-a-63b2983992" podStartSLOduration=1.474653403 podCreationTimestamp="2024-04-12 18:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:54.466900795 +0000 UTC m=+1.312741898" watchObservedRunningTime="2024-04-12 18:30:54.474653403 +0000 UTC m=+1.320494506" Apr 12 18:30:54.486416 kubelet[2403]: I0412 18:30:54.486381 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" podStartSLOduration=1.486340375 podCreationTimestamp="2024-04-12 18:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:30:54.475605684 +0000 UTC m=+1.321446787" watchObservedRunningTime="2024-04-12 18:30:54.486340375 +0000 UTC m=+1.332181478" Apr 12 18:30:55.364402 sudo[1643]: pam_unix(sudo:session): session closed for user root Apr 12 18:30:55.459682 sshd[1640]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:55.463749 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 12 18:30:55.463966 systemd[1]: session-7.scope: Consumed 6.802s CPU time. Apr 12 18:30:55.464450 systemd-logind[1298]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:30:55.464524 systemd[1]: sshd@4-10.200.20.17:22-10.200.12.6:42312.service: Deactivated successfully. Apr 12 18:30:55.467669 systemd-logind[1298]: Removed session 7. Apr 12 18:31:06.958114 kubelet[2403]: I0412 18:31:06.958082 2403 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:31:06.959188 env[1311]: time="2024-04-12T18:31:06.959134384Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:31:06.959764 kubelet[2403]: I0412 18:31:06.959739 2403 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:31:07.572108 kubelet[2403]: I0412 18:31:07.572045 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:31:07.583008 kubelet[2403]: I0412 18:31:07.582931 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0267aa0e-2f04-4e46-acd4-4984e3c9e5a7-xtables-lock\") pod \"kube-proxy-ksn2r\" (UID: \"0267aa0e-2f04-4e46-acd4-4984e3c9e5a7\") " pod="kube-system/kube-proxy-ksn2r" Apr 12 18:31:07.583608 kubelet[2403]: I0412 18:31:07.582989 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68bh\" (UniqueName: \"kubernetes.io/projected/0267aa0e-2f04-4e46-acd4-4984e3c9e5a7-kube-api-access-p68bh\") pod \"kube-proxy-ksn2r\" (UID: \"0267aa0e-2f04-4e46-acd4-4984e3c9e5a7\") " pod="kube-system/kube-proxy-ksn2r" Apr 12 18:31:07.583608 kubelet[2403]: I0412 18:31:07.583558 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/0267aa0e-2f04-4e46-acd4-4984e3c9e5a7-kube-proxy\") pod \"kube-proxy-ksn2r\" (UID: \"0267aa0e-2f04-4e46-acd4-4984e3c9e5a7\") " pod="kube-system/kube-proxy-ksn2r" Apr 12 18:31:07.583727 kubelet[2403]: I0412 18:31:07.583613 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0267aa0e-2f04-4e46-acd4-4984e3c9e5a7-lib-modules\") pod \"kube-proxy-ksn2r\" (UID: \"0267aa0e-2f04-4e46-acd4-4984e3c9e5a7\") " pod="kube-system/kube-proxy-ksn2r" Apr 12 18:31:07.585925 systemd[1]: Created slice kubepods-besteffort-pod0267aa0e_2f04_4e46_acd4_4984e3c9e5a7.slice. Apr 12 18:31:07.607526 kubelet[2403]: I0412 18:31:07.607454 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:31:07.613446 systemd[1]: Created slice kubepods-burstable-podf27e7037_5dda_40cf_a2ab_5d00492f2bb2.slice. Apr 12 18:31:07.684871 kubelet[2403]: I0412 18:31:07.684799 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-lib-modules\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.684871 kubelet[2403]: I0412 18:31:07.684868 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-kernel\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685076 kubelet[2403]: I0412 18:31:07.684892 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hubble-tls\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") 
" pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685076 kubelet[2403]: I0412 18:31:07.684928 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-bpf-maps\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685076 kubelet[2403]: I0412 18:31:07.684948 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cni-path\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685076 kubelet[2403]: I0412 18:31:07.684969 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-net\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685076 kubelet[2403]: I0412 18:31:07.685014 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hostproc\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685076 kubelet[2403]: I0412 18:31:07.685033 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-xtables-lock\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685222 kubelet[2403]: I0412 18:31:07.685067 2403 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-etc-cni-netd\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685222 kubelet[2403]: I0412 18:31:07.685102 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-run\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685222 kubelet[2403]: I0412 18:31:07.685129 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-config-path\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685222 kubelet[2403]: I0412 18:31:07.685162 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvcdc\" (UniqueName: \"kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-kube-api-access-cvcdc\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685222 kubelet[2403]: I0412 18:31:07.685204 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-cgroup\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.685340 kubelet[2403]: I0412 18:31:07.685234 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-clustermesh-secrets\") pod \"cilium-lch5r\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " pod="kube-system/cilium-lch5r" Apr 12 18:31:07.722616 kubelet[2403]: E0412 18:31:07.722512 2403 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 12 18:31:07.722616 kubelet[2403]: E0412 18:31:07.722611 2403 projected.go:198] Error preparing data for projected volume kube-api-access-p68bh for pod kube-system/kube-proxy-ksn2r: configmap "kube-root-ca.crt" not found Apr 12 18:31:07.722808 kubelet[2403]: E0412 18:31:07.722692 2403 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0267aa0e-2f04-4e46-acd4-4984e3c9e5a7-kube-api-access-p68bh podName:0267aa0e-2f04-4e46-acd4-4984e3c9e5a7 nodeName:}" failed. No retries permitted until 2024-04-12 18:31:08.222669693 +0000 UTC m=+15.068510756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p68bh" (UniqueName: "kubernetes.io/projected/0267aa0e-2f04-4e46-acd4-4984e3c9e5a7-kube-api-access-p68bh") pod "kube-proxy-ksn2r" (UID: "0267aa0e-2f04-4e46-acd4-4984e3c9e5a7") : configmap "kube-root-ca.crt" not found Apr 12 18:31:07.918503 env[1311]: time="2024-04-12T18:31:07.918363164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lch5r,Uid:f27e7037-5dda-40cf-a2ab-5d00492f2bb2,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:07.939230 kubelet[2403]: I0412 18:31:07.939179 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:31:07.951134 systemd[1]: Created slice kubepods-besteffort-pod59397919_f41f_4edf_be3d_740847a35d37.slice. Apr 12 18:31:07.952682 env[1311]: time="2024-04-12T18:31:07.952358471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:07.952682 env[1311]: time="2024-04-12T18:31:07.952398791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:07.952682 env[1311]: time="2024-04-12T18:31:07.952409311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:07.952682 env[1311]: time="2024-04-12T18:31:07.952533511Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df pid=2483 runtime=io.containerd.runc.v2 Apr 12 18:31:07.974175 systemd[1]: Started cri-containerd-62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df.scope. Apr 12 18:31:07.987874 kubelet[2403]: I0412 18:31:07.987839 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wzg\" (UniqueName: \"kubernetes.io/projected/59397919-f41f-4edf-be3d-740847a35d37-kube-api-access-d7wzg\") pod \"cilium-operator-574c4bb98d-vq8fl\" (UID: \"59397919-f41f-4edf-be3d-740847a35d37\") " pod="kube-system/cilium-operator-574c4bb98d-vq8fl" Apr 12 18:31:07.988308 kubelet[2403]: I0412 18:31:07.988278 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59397919-f41f-4edf-be3d-740847a35d37-cilium-config-path\") pod \"cilium-operator-574c4bb98d-vq8fl\" (UID: \"59397919-f41f-4edf-be3d-740847a35d37\") " pod="kube-system/cilium-operator-574c4bb98d-vq8fl" Apr 12 18:31:08.008234 env[1311]: time="2024-04-12T18:31:08.008181114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lch5r,Uid:f27e7037-5dda-40cf-a2ab-5d00492f2bb2,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\"" Apr 12 18:31:08.010370 env[1311]: time="2024-04-12T18:31:08.010314875Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:31:08.257272 env[1311]: time="2024-04-12T18:31:08.256713261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-vq8fl,Uid:59397919-f41f-4edf-be3d-740847a35d37,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:08.288686 env[1311]: time="2024-04-12T18:31:08.288557645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:08.288847 env[1311]: time="2024-04-12T18:31:08.288695245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:08.288847 env[1311]: time="2024-04-12T18:31:08.288723445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:08.288977 env[1311]: time="2024-04-12T18:31:08.288936446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071 pid=2530 runtime=io.containerd.runc.v2 Apr 12 18:31:08.301910 systemd[1]: Started cri-containerd-979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071.scope. 
Apr 12 18:31:08.341195 env[1311]: time="2024-04-12T18:31:08.341135645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-vq8fl,Uid:59397919-f41f-4edf-be3d-740847a35d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\"" Apr 12 18:31:08.496453 env[1311]: time="2024-04-12T18:31:08.496400322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksn2r,Uid:0267aa0e-2f04-4e46-acd4-4984e3c9e5a7,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:08.531746 env[1311]: time="2024-04-12T18:31:08.531209869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:08.531983 env[1311]: time="2024-04-12T18:31:08.531924629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:08.532078 env[1311]: time="2024-04-12T18:31:08.532057589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:08.532386 env[1311]: time="2024-04-12T18:31:08.532337909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c80a1fbd9d11808f3121df3333bd3ce8701b59ba11156c79b1dd0df7b05154a3 pid=2574 runtime=io.containerd.runc.v2 Apr 12 18:31:08.545093 systemd[1]: Started cri-containerd-c80a1fbd9d11808f3121df3333bd3ce8701b59ba11156c79b1dd0df7b05154a3.scope. 
Apr 12 18:31:08.571819 env[1311]: time="2024-04-12T18:31:08.571754379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksn2r,Uid:0267aa0e-2f04-4e46-acd4-4984e3c9e5a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c80a1fbd9d11808f3121df3333bd3ce8701b59ba11156c79b1dd0df7b05154a3\"" Apr 12 18:31:08.577542 env[1311]: time="2024-04-12T18:31:08.577490504Z" level=info msg="CreateContainer within sandbox \"c80a1fbd9d11808f3121df3333bd3ce8701b59ba11156c79b1dd0df7b05154a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:31:08.624222 env[1311]: time="2024-04-12T18:31:08.624160819Z" level=info msg="CreateContainer within sandbox \"c80a1fbd9d11808f3121df3333bd3ce8701b59ba11156c79b1dd0df7b05154a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aebfd6eb3314c2f7bdcd83aa3774dcca63d28189cd86627a495e4a2ec8c65072\"" Apr 12 18:31:08.627152 env[1311]: time="2024-04-12T18:31:08.625861660Z" level=info msg="StartContainer for \"aebfd6eb3314c2f7bdcd83aa3774dcca63d28189cd86627a495e4a2ec8c65072\"" Apr 12 18:31:08.642712 systemd[1]: Started cri-containerd-aebfd6eb3314c2f7bdcd83aa3774dcca63d28189cd86627a495e4a2ec8c65072.scope. Apr 12 18:31:08.678301 env[1311]: time="2024-04-12T18:31:08.678239020Z" level=info msg="StartContainer for \"aebfd6eb3314c2f7bdcd83aa3774dcca63d28189cd86627a495e4a2ec8c65072\" returns successfully" Apr 12 18:31:12.517346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396140377.mount: Deactivated successfully. 
Apr 12 18:31:15.602663 env[1311]: time="2024-04-12T18:31:15.602566160Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.608492 env[1311]: time="2024-04-12T18:31:15.608436124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.613113 env[1311]: time="2024-04-12T18:31:15.613063567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.613856 env[1311]: time="2024-04-12T18:31:15.613821368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 12 18:31:15.616086 env[1311]: time="2024-04-12T18:31:15.616043169Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:31:15.617568 env[1311]: time="2024-04-12T18:31:15.617453570Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:31:15.643653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762092427.mount: Deactivated successfully. Apr 12 18:31:15.649365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476421458.mount: Deactivated successfully. 
Apr 12 18:31:15.661211 env[1311]: time="2024-04-12T18:31:15.661150919Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\""
Apr 12 18:31:15.664337 env[1311]: time="2024-04-12T18:31:15.662466000Z" level=info msg="StartContainer for \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\""
Apr 12 18:31:15.681843 systemd[1]: Started cri-containerd-47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa.scope.
Apr 12 18:31:15.717658 env[1311]: time="2024-04-12T18:31:15.717556956Z" level=info msg="StartContainer for \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\" returns successfully"
Apr 12 18:31:15.726096 systemd[1]: cri-containerd-47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa.scope: Deactivated successfully.
Apr 12 18:31:16.372892 env[1311]: time="2024-04-12T18:31:16.372844341Z" level=info msg="shim disconnected" id=47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa
Apr 12 18:31:16.373331 env[1311]: time="2024-04-12T18:31:16.373305982Z" level=warning msg="cleaning up after shim disconnected" id=47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa namespace=k8s.io
Apr 12 18:31:16.373442 env[1311]: time="2024-04-12T18:31:16.373427862Z" level=info msg="cleaning up dead shim"
Apr 12 18:31:16.380865 env[1311]: time="2024-04-12T18:31:16.380824106Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2808 runtime=io.containerd.runc.v2\n"
Apr 12 18:31:16.477467 env[1311]: time="2024-04-12T18:31:16.477420809Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:31:16.493473 kubelet[2403]: I0412 18:31:16.493372 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ksn2r" podStartSLOduration=9.493281579 podCreationTimestamp="2024-04-12 18:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:09.475840615 +0000 UTC m=+16.321681678" watchObservedRunningTime="2024-04-12 18:31:16.493281579 +0000 UTC m=+23.339122682"
Apr 12 18:31:16.522083 env[1311]: time="2024-04-12T18:31:16.522022077Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\""
Apr 12 18:31:16.522916 env[1311]: time="2024-04-12T18:31:16.522822798Z" level=info msg="StartContainer for \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\""
Apr 12 18:31:16.539591 systemd[1]: Started cri-containerd-bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42.scope.
Apr 12 18:31:16.575322 env[1311]: time="2024-04-12T18:31:16.575276272Z" level=info msg="StartContainer for \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\" returns successfully"
Apr 12 18:31:16.581328 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:31:16.581608 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:31:16.581815 systemd[1]: Stopping systemd-sysctl.service...
Apr 12 18:31:16.584933 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:31:16.590328 systemd[1]: cri-containerd-bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42.scope: Deactivated successfully.
Apr 12 18:31:16.596149 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:31:16.627034 env[1311]: time="2024-04-12T18:31:16.626510825Z" level=info msg="shim disconnected" id=bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42
Apr 12 18:31:16.627034 env[1311]: time="2024-04-12T18:31:16.626565065Z" level=warning msg="cleaning up after shim disconnected" id=bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42 namespace=k8s.io
Apr 12 18:31:16.627034 env[1311]: time="2024-04-12T18:31:16.626640305Z" level=info msg="cleaning up dead shim"
Apr 12 18:31:16.634405 env[1311]: time="2024-04-12T18:31:16.634351910Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2873 runtime=io.containerd.runc.v2\n"
Apr 12 18:31:16.641321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa-rootfs.mount: Deactivated successfully.
Apr 12 18:31:17.485153 env[1311]: time="2024-04-12T18:31:17.485107412Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:31:17.495488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890624333.mount: Deactivated successfully.
Apr 12 18:31:17.531191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900575489.mount: Deactivated successfully.
Apr 12 18:31:17.551500 env[1311]: time="2024-04-12T18:31:17.551453214Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\""
Apr 12 18:31:17.553810 env[1311]: time="2024-04-12T18:31:17.553767495Z" level=info msg="StartContainer for \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\""
Apr 12 18:31:17.576944 systemd[1]: Started cri-containerd-4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8.scope.
Apr 12 18:31:17.616981 systemd[1]: cri-containerd-4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8.scope: Deactivated successfully.
Apr 12 18:31:17.621098 env[1311]: time="2024-04-12T18:31:17.621035698Z" level=info msg="StartContainer for \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\" returns successfully"
Apr 12 18:31:17.653744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8-rootfs.mount: Deactivated successfully.
Apr 12 18:31:17.676937 env[1311]: time="2024-04-12T18:31:17.676890373Z" level=info msg="shim disconnected" id=4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8
Apr 12 18:31:17.677359 env[1311]: time="2024-04-12T18:31:17.677335893Z" level=warning msg="cleaning up after shim disconnected" id=4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8 namespace=k8s.io
Apr 12 18:31:17.677448 env[1311]: time="2024-04-12T18:31:17.677434693Z" level=info msg="cleaning up dead shim"
Apr 12 18:31:17.695472 env[1311]: time="2024-04-12T18:31:17.695426145Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2935 runtime=io.containerd.runc.v2\n"
Apr 12 18:31:18.120971 env[1311]: time="2024-04-12T18:31:18.120920092Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:31:18.127991 env[1311]: time="2024-04-12T18:31:18.127944936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:31:18.132636 env[1311]: time="2024-04-12T18:31:18.132565339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:31:18.133334 env[1311]: time="2024-04-12T18:31:18.133298540Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 12 18:31:18.137947 env[1311]: time="2024-04-12T18:31:18.137901183Z" level=info msg="CreateContainer within sandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 12 18:31:18.175881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005432653.mount: Deactivated successfully.
Apr 12 18:31:18.189037 env[1311]: time="2024-04-12T18:31:18.188975014Z" level=info msg="CreateContainer within sandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\""
Apr 12 18:31:18.191099 env[1311]: time="2024-04-12T18:31:18.189915215Z" level=info msg="StartContainer for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\""
Apr 12 18:31:18.207756 systemd[1]: Started cri-containerd-68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144.scope.
Apr 12 18:31:18.242729 env[1311]: time="2024-04-12T18:31:18.242569848Z" level=info msg="StartContainer for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" returns successfully"
Apr 12 18:31:18.488915 env[1311]: time="2024-04-12T18:31:18.488807560Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:31:18.524035 kubelet[2403]: I0412 18:31:18.523894 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-vq8fl" podStartSLOduration=1.732771448 podCreationTimestamp="2024-04-12 18:31:07 +0000 UTC" firstStartedPulling="2024-04-12 18:31:08.342543926 +0000 UTC m=+15.188385029" lastFinishedPulling="2024-04-12 18:31:18.1336257 +0000 UTC m=+24.979466803" observedRunningTime="2024-04-12 18:31:18.497927886 +0000 UTC m=+25.343768989" watchObservedRunningTime="2024-04-12 18:31:18.523853222 +0000 UTC m=+25.369694325"
Apr 12 18:31:18.525621 env[1311]: time="2024-04-12T18:31:18.525527143Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\""
Apr 12 18:31:18.528270 env[1311]: time="2024-04-12T18:31:18.528223425Z" level=info msg="StartContainer for \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\""
Apr 12 18:31:18.562478 systemd[1]: Started cri-containerd-696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8.scope.
Apr 12 18:31:18.599727 systemd[1]: cri-containerd-696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8.scope: Deactivated successfully.
Apr 12 18:31:18.600754 env[1311]: time="2024-04-12T18:31:18.600707070Z" level=info msg="StartContainer for \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\" returns successfully"
Apr 12 18:31:18.641889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505130192.mount: Deactivated successfully.
Apr 12 18:31:18.861311 env[1311]: time="2024-04-12T18:31:18.861249071Z" level=info msg="shim disconnected" id=696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8
Apr 12 18:31:18.861311 env[1311]: time="2024-04-12T18:31:18.861303271Z" level=warning msg="cleaning up after shim disconnected" id=696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8 namespace=k8s.io
Apr 12 18:31:18.861311 env[1311]: time="2024-04-12T18:31:18.861313151Z" level=info msg="cleaning up dead shim"
Apr 12 18:31:18.877068 env[1311]: time="2024-04-12T18:31:18.877019561Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:31:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3025 runtime=io.containerd.runc.v2\n"
Apr 12 18:31:19.492247 env[1311]: time="2024-04-12T18:31:19.492194937Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:31:19.534587 env[1311]: time="2024-04-12T18:31:19.534505323Z" level=info msg="CreateContainer within sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\""
Apr 12 18:31:19.535765 env[1311]: time="2024-04-12T18:31:19.535695284Z" level=info msg="StartContainer for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\""
Apr 12 18:31:19.559333 systemd[1]: Started cri-containerd-6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c.scope.
Apr 12 18:31:19.601091 env[1311]: time="2024-04-12T18:31:19.601022563Z" level=info msg="StartContainer for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" returns successfully"
Apr 12 18:31:19.694597 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Apr 12 18:31:19.713599 kubelet[2403]: I0412 18:31:19.713400 2403 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Apr 12 18:31:19.750967 kubelet[2403]: I0412 18:31:19.750850 2403 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:31:19.757008 systemd[1]: Created slice kubepods-burstable-pod3a1ca717_9816_43f2_89dd_47eed0e75a9c.slice.
Apr 12 18:31:19.760951 kubelet[2403]: I0412 18:31:19.760912 2403 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:31:19.766271 systemd[1]: Created slice kubepods-burstable-pod406765e6_91ac_493d_be8b_9de75e6cc5df.slice.
Apr 12 18:31:19.771502 kubelet[2403]: I0412 18:31:19.771458 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/406765e6-91ac-493d-be8b-9de75e6cc5df-config-volume\") pod \"coredns-5d78c9869d-dhdr9\" (UID: \"406765e6-91ac-493d-be8b-9de75e6cc5df\") " pod="kube-system/coredns-5d78c9869d-dhdr9"
Apr 12 18:31:19.771744 kubelet[2403]: I0412 18:31:19.771729 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdbfj\" (UniqueName: \"kubernetes.io/projected/406765e6-91ac-493d-be8b-9de75e6cc5df-kube-api-access-rdbfj\") pod \"coredns-5d78c9869d-dhdr9\" (UID: \"406765e6-91ac-493d-be8b-9de75e6cc5df\") " pod="kube-system/coredns-5d78c9869d-dhdr9"
Apr 12 18:31:19.771849 kubelet[2403]: I0412 18:31:19.771838 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a1ca717-9816-43f2-89dd-47eed0e75a9c-config-volume\") pod \"coredns-5d78c9869d-7b76m\" (UID: \"3a1ca717-9816-43f2-89dd-47eed0e75a9c\") " pod="kube-system/coredns-5d78c9869d-7b76m"
Apr 12 18:31:19.771945 kubelet[2403]: I0412 18:31:19.771934 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gx8\" (UniqueName: \"kubernetes.io/projected/3a1ca717-9816-43f2-89dd-47eed0e75a9c-kube-api-access-26gx8\") pod \"coredns-5d78c9869d-7b76m\" (UID: \"3a1ca717-9816-43f2-89dd-47eed0e75a9c\") " pod="kube-system/coredns-5d78c9869d-7b76m"
Apr 12 18:31:20.063376 env[1311]: time="2024-04-12T18:31:20.063207564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-7b76m,Uid:3a1ca717-9816-43f2-89dd-47eed0e75a9c,Namespace:kube-system,Attempt:0,}"
Apr 12 18:31:20.070612 env[1311]: time="2024-04-12T18:31:20.070541528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-dhdr9,Uid:406765e6-91ac-493d-be8b-9de75e6cc5df,Namespace:kube-system,Attempt:0,}"
Apr 12 18:31:20.515616 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Apr 12 18:31:22.158689 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Apr 12 18:31:22.159087 systemd-networkd[1458]: cilium_host: Link UP
Apr 12 18:31:22.159213 systemd-networkd[1458]: cilium_net: Link UP
Apr 12 18:31:22.159215 systemd-networkd[1458]: cilium_net: Gained carrier
Apr 12 18:31:22.159324 systemd-networkd[1458]: cilium_host: Gained carrier
Apr 12 18:31:22.161824 systemd-networkd[1458]: cilium_host: Gained IPv6LL
Apr 12 18:31:22.363439 systemd-networkd[1458]: cilium_vxlan: Link UP
Apr 12 18:31:22.363450 systemd-networkd[1458]: cilium_vxlan: Gained carrier
Apr 12 18:31:22.681606 kernel: NET: Registered PF_ALG protocol family
Apr 12 18:31:22.963759 systemd-networkd[1458]: cilium_net: Gained IPv6LL
Apr 12 18:31:23.480873 systemd-networkd[1458]: lxc_health: Link UP
Apr 12 18:31:23.502851 systemd-networkd[1458]: lxc_health: Gained carrier
Apr 12 18:31:23.503623 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:31:23.603722 systemd-networkd[1458]: cilium_vxlan: Gained IPv6LL
Apr 12 18:31:23.657993 systemd-networkd[1458]: lxc9f65b74c2e57: Link UP
Apr 12 18:31:23.668941 kernel: eth0: renamed from tmpae244
Apr 12 18:31:23.682554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9f65b74c2e57: link becomes ready
Apr 12 18:31:23.679493 systemd-networkd[1458]: lxc9f65b74c2e57: Gained carrier
Apr 12 18:31:23.937779 kubelet[2403]: I0412 18:31:23.937735 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lch5r" podStartSLOduration=9.332580408 podCreationTimestamp="2024-04-12 18:31:07 +0000 UTC" firstStartedPulling="2024-04-12 18:31:08.009713275 +0000 UTC m=+14.855554378" lastFinishedPulling="2024-04-12 18:31:15.614827808 +0000 UTC m=+22.460668911" observedRunningTime="2024-04-12 18:31:20.521641518 +0000 UTC m=+27.367482621" watchObservedRunningTime="2024-04-12 18:31:23.937694941 +0000 UTC m=+30.783536004"
Apr 12 18:31:24.143758 systemd-networkd[1458]: lxc9f3dcb7ed48e: Link UP
Apr 12 18:31:24.151601 kernel: eth0: renamed from tmp85a7d
Apr 12 18:31:24.161647 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9f3dcb7ed48e: link becomes ready
Apr 12 18:31:24.161672 systemd-networkd[1458]: lxc9f3dcb7ed48e: Gained carrier
Apr 12 18:31:24.947710 systemd-networkd[1458]: lxc9f65b74c2e57: Gained IPv6LL
Apr 12 18:31:25.203725 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Apr 12 18:31:25.715741 systemd-networkd[1458]: lxc9f3dcb7ed48e: Gained IPv6LL
Apr 12 18:31:27.857923 env[1311]: time="2024-04-12T18:31:27.857820876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:31:27.857923 env[1311]: time="2024-04-12T18:31:27.857919116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:31:27.858298 env[1311]: time="2024-04-12T18:31:27.857947116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:31:27.858298 env[1311]: time="2024-04-12T18:31:27.858194316Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85a7d7181b9e3e31ab8f672cd5d4a10f215b7d00ee43372286e96795afb12bf1 pid=3572 runtime=io.containerd.runc.v2
Apr 12 18:31:27.899422 systemd[1]: run-containerd-runc-k8s.io-85a7d7181b9e3e31ab8f672cd5d4a10f215b7d00ee43372286e96795afb12bf1-runc.AcwQb7.mount: Deactivated successfully.
Apr 12 18:31:27.905381 systemd[1]: Started cri-containerd-85a7d7181b9e3e31ab8f672cd5d4a10f215b7d00ee43372286e96795afb12bf1.scope.
Apr 12 18:31:27.925751 env[1311]: time="2024-04-12T18:31:27.920901189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:31:27.925751 env[1311]: time="2024-04-12T18:31:27.920941350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:31:27.925751 env[1311]: time="2024-04-12T18:31:27.920951990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:31:27.925751 env[1311]: time="2024-04-12T18:31:27.921069310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae2443ba4a264c30115f612855cc59f98e290763db32a9e58ed30e1301990bc0 pid=3603 runtime=io.containerd.runc.v2
Apr 12 18:31:27.947836 systemd[1]: Started cri-containerd-ae2443ba4a264c30115f612855cc59f98e290763db32a9e58ed30e1301990bc0.scope.
Apr 12 18:31:27.977879 env[1311]: time="2024-04-12T18:31:27.977834300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-7b76m,Uid:3a1ca717-9816-43f2-89dd-47eed0e75a9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"85a7d7181b9e3e31ab8f672cd5d4a10f215b7d00ee43372286e96795afb12bf1\""
Apr 12 18:31:27.981891 env[1311]: time="2024-04-12T18:31:27.981840822Z" level=info msg="CreateContainer within sandbox \"85a7d7181b9e3e31ab8f672cd5d4a10f215b7d00ee43372286e96795afb12bf1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:31:28.008368 env[1311]: time="2024-04-12T18:31:28.008317516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-dhdr9,Uid:406765e6-91ac-493d-be8b-9de75e6cc5df,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae2443ba4a264c30115f612855cc59f98e290763db32a9e58ed30e1301990bc0\""
Apr 12 18:31:28.014797 env[1311]: time="2024-04-12T18:31:28.014080919Z" level=info msg="CreateContainer within sandbox \"ae2443ba4a264c30115f612855cc59f98e290763db32a9e58ed30e1301990bc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:31:28.019363 env[1311]: time="2024-04-12T18:31:28.019307282Z" level=info msg="CreateContainer within sandbox \"85a7d7181b9e3e31ab8f672cd5d4a10f215b7d00ee43372286e96795afb12bf1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85d2a4630f0f3074cb51e47a106a81d6d8be6e63d27a345975bdf3e1e23ff7d5\""
Apr 12 18:31:28.020275 env[1311]: time="2024-04-12T18:31:28.020242042Z" level=info msg="StartContainer for \"85d2a4630f0f3074cb51e47a106a81d6d8be6e63d27a345975bdf3e1e23ff7d5\""
Apr 12 18:31:28.042096 systemd[1]: Started cri-containerd-85d2a4630f0f3074cb51e47a106a81d6d8be6e63d27a345975bdf3e1e23ff7d5.scope.
Apr 12 18:31:28.065161 env[1311]: time="2024-04-12T18:31:28.065107825Z" level=info msg="CreateContainer within sandbox \"ae2443ba4a264c30115f612855cc59f98e290763db32a9e58ed30e1301990bc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74e3b6395d7ffaebf41e340c191b9b9960af57007b96d0688e6821f6afa6fb0d\""
Apr 12 18:31:28.066141 env[1311]: time="2024-04-12T18:31:28.066095186Z" level=info msg="StartContainer for \"74e3b6395d7ffaebf41e340c191b9b9960af57007b96d0688e6821f6afa6fb0d\""
Apr 12 18:31:28.087553 env[1311]: time="2024-04-12T18:31:28.087487317Z" level=info msg="StartContainer for \"85d2a4630f0f3074cb51e47a106a81d6d8be6e63d27a345975bdf3e1e23ff7d5\" returns successfully"
Apr 12 18:31:28.102917 systemd[1]: Started cri-containerd-74e3b6395d7ffaebf41e340c191b9b9960af57007b96d0688e6821f6afa6fb0d.scope.
Apr 12 18:31:28.165636 env[1311]: time="2024-04-12T18:31:28.165500238Z" level=info msg="StartContainer for \"74e3b6395d7ffaebf41e340c191b9b9960af57007b96d0688e6821f6afa6fb0d\" returns successfully"
Apr 12 18:31:28.524908 kubelet[2403]: I0412 18:31:28.524785 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-dhdr9" podStartSLOduration=21.524746545 podCreationTimestamp="2024-04-12 18:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:28.524736625 +0000 UTC m=+35.370577728" watchObservedRunningTime="2024-04-12 18:31:28.524746545 +0000 UTC m=+35.370587648"
Apr 12 18:33:54.050335 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.12.6:60264.service.
Apr 12 18:33:54.493420 sshd[3746]: Accepted publickey for core from 10.200.12.6 port 60264 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:33:54.495424 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:33:54.499655 systemd-logind[1298]: New session 8 of user core.
Apr 12 18:33:54.500386 systemd[1]: Started session-8.scope.
Apr 12 18:33:54.895777 sshd[3746]: pam_unix(sshd:session): session closed for user core
Apr 12 18:33:54.898626 systemd-logind[1298]: Session 8 logged out. Waiting for processes to exit.
Apr 12 18:33:54.898785 systemd[1]: session-8.scope: Deactivated successfully.
Apr 12 18:33:54.899585 systemd[1]: sshd@5-10.200.20.17:22-10.200.12.6:60264.service: Deactivated successfully.
Apr 12 18:33:54.901081 systemd-logind[1298]: Removed session 8.
Apr 12 18:33:59.968493 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.12.6:47200.service.
Apr 12 18:34:00.384712 sshd[3759]: Accepted publickey for core from 10.200.12.6 port 47200 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:00.386470 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:00.391530 systemd[1]: Started session-9.scope.
Apr 12 18:34:00.391937 systemd-logind[1298]: New session 9 of user core.
Apr 12 18:34:00.753479 sshd[3759]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:00.756639 systemd[1]: sshd@6-10.200.20.17:22-10.200.12.6:47200.service: Deactivated successfully.
Apr 12 18:34:00.757709 systemd[1]: session-9.scope: Deactivated successfully.
Apr 12 18:34:00.759042 systemd-logind[1298]: Session 9 logged out. Waiting for processes to exit.
Apr 12 18:34:00.760257 systemd-logind[1298]: Removed session 9.
Apr 12 18:34:05.824918 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.12.6:46242.service.
Apr 12 18:34:06.241301 sshd[3771]: Accepted publickey for core from 10.200.12.6 port 46242 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:06.243086 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:06.246992 systemd-logind[1298]: New session 10 of user core.
Apr 12 18:34:06.249460 systemd[1]: Started session-10.scope.
Apr 12 18:34:06.606167 sshd[3771]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:06.609013 systemd[1]: sshd@7-10.200.20.17:22-10.200.12.6:46242.service: Deactivated successfully.
Apr 12 18:34:06.609889 systemd[1]: session-10.scope: Deactivated successfully.
Apr 12 18:34:06.610555 systemd-logind[1298]: Session 10 logged out. Waiting for processes to exit.
Apr 12 18:34:06.611394 systemd-logind[1298]: Removed session 10.
Apr 12 18:34:11.676238 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.12.6:46250.service.
Apr 12 18:34:12.088037 sshd[3785]: Accepted publickey for core from 10.200.12.6 port 46250 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:12.089773 sshd[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:12.093713 systemd-logind[1298]: New session 11 of user core.
Apr 12 18:34:12.095141 systemd[1]: Started session-11.scope.
Apr 12 18:34:12.449739 sshd[3785]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:12.452411 systemd[1]: sshd@8-10.200.20.17:22-10.200.12.6:46250.service: Deactivated successfully.
Apr 12 18:34:12.453187 systemd[1]: session-11.scope: Deactivated successfully.
Apr 12 18:34:12.453917 systemd-logind[1298]: Session 11 logged out. Waiting for processes to exit.
Apr 12 18:34:12.454974 systemd-logind[1298]: Removed session 11.
Apr 12 18:34:12.521856 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.12.6:46262.service.
Apr 12 18:34:12.933755 sshd[3799]: Accepted publickey for core from 10.200.12.6 port 46262 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:12.935465 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:12.939376 systemd-logind[1298]: New session 12 of user core.
Apr 12 18:34:12.942741 systemd[1]: Started session-12.scope.
Apr 12 18:34:13.943173 sshd[3799]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:13.946256 systemd[1]: sshd@9-10.200.20.17:22-10.200.12.6:46262.service: Deactivated successfully.
Apr 12 18:34:13.947157 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 18:34:13.948150 systemd-logind[1298]: Session 12 logged out. Waiting for processes to exit.
Apr 12 18:34:13.949069 systemd-logind[1298]: Removed session 12.
Apr 12 18:34:14.012179 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.12.6:46268.service.
Apr 12 18:34:14.420502 sshd[3810]: Accepted publickey for core from 10.200.12.6 port 46268 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:14.422110 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:14.426997 systemd[1]: Started session-13.scope.
Apr 12 18:34:14.428369 systemd-logind[1298]: New session 13 of user core.
Apr 12 18:34:14.787099 sshd[3810]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:14.789869 systemd[1]: sshd@10-10.200.20.17:22-10.200.12.6:46268.service: Deactivated successfully.
Apr 12 18:34:14.790682 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 18:34:14.791265 systemd-logind[1298]: Session 13 logged out. Waiting for processes to exit.
Apr 12 18:34:14.791980 systemd-logind[1298]: Removed session 13.
Apr 12 18:34:19.857490 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.12.6:44728.service.
Apr 12 18:34:20.275481 sshd[3823]: Accepted publickey for core from 10.200.12.6 port 44728 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:20.276661 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:20.283657 systemd[1]: Started session-14.scope.
Apr 12 18:34:20.284003 systemd-logind[1298]: New session 14 of user core.
Apr 12 18:34:20.651856 sshd[3823]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:20.654998 systemd[1]: sshd@11-10.200.20.17:22-10.200.12.6:44728.service: Deactivated successfully.
Apr 12 18:34:20.655712 systemd-logind[1298]: Session 14 logged out. Waiting for processes to exit.
Apr 12 18:34:20.655841 systemd[1]: session-14.scope: Deactivated successfully.
Apr 12 18:34:20.656804 systemd-logind[1298]: Removed session 14.
Apr 12 18:34:25.725551 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.12.6:54024.service.
Apr 12 18:34:26.136306 sshd[3838]: Accepted publickey for core from 10.200.12.6 port 54024 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:26.138235 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:26.142640 systemd-logind[1298]: New session 15 of user core.
Apr 12 18:34:26.143206 systemd[1]: Started session-15.scope.
Apr 12 18:34:26.511300 sshd[3838]: pam_unix(sshd:session): session closed for user core
Apr 12 18:34:26.514779 systemd-logind[1298]: Session 15 logged out. Waiting for processes to exit.
Apr 12 18:34:26.514951 systemd[1]: sshd@12-10.200.20.17:22-10.200.12.6:54024.service: Deactivated successfully.
Apr 12 18:34:26.515734 systemd[1]: session-15.scope: Deactivated successfully.
Apr 12 18:34:26.516643 systemd-logind[1298]: Removed session 15.
Apr 12 18:34:26.583036 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.12.6:54034.service.
Apr 12 18:34:26.998640 sshd[3850]: Accepted publickey for core from 10.200.12.6 port 54034 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:34:27.000353 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:34:27.004699 systemd-logind[1298]: New session 16 of user core.
Apr 12 18:34:27.005247 systemd[1]: Started session-16.scope.
Apr 12 18:34:27.403388 sshd[3850]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:27.406212 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:34:27.406830 systemd[1]: sshd@13-10.200.20.17:22-10.200.12.6:54034.service: Deactivated successfully. Apr 12 18:34:27.407796 systemd-logind[1298]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:34:27.408860 systemd-logind[1298]: Removed session 16. Apr 12 18:34:27.474678 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.12.6:54036.service. Apr 12 18:34:27.891949 sshd[3860]: Accepted publickey for core from 10.200.12.6 port 54036 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:27.893745 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:27.897950 systemd-logind[1298]: New session 17 of user core. Apr 12 18:34:27.898508 systemd[1]: Started session-17.scope. Apr 12 18:34:28.988903 sshd[3860]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:28.992076 systemd[1]: sshd@14-10.200.20.17:22-10.200.12.6:54036.service: Deactivated successfully. Apr 12 18:34:28.992897 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:34:28.993503 systemd-logind[1298]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:34:28.994300 systemd-logind[1298]: Removed session 17. Apr 12 18:34:29.059474 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.12.6:54042.service. Apr 12 18:34:29.476212 sshd[3878]: Accepted publickey for core from 10.200.12.6 port 54042 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:29.477960 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:29.482744 systemd[1]: Started session-18.scope. Apr 12 18:34:29.483514 systemd-logind[1298]: New session 18 of user core. 
Apr 12 18:34:30.037857 sshd[3878]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:30.040513 systemd[1]: sshd@15-10.200.20.17:22-10.200.12.6:54042.service: Deactivated successfully. Apr 12 18:34:30.041421 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:34:30.042194 systemd-logind[1298]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:34:30.043474 systemd-logind[1298]: Removed session 18. Apr 12 18:34:30.112001 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.12.6:54048.service. Apr 12 18:34:30.529185 sshd[3889]: Accepted publickey for core from 10.200.12.6 port 54048 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:30.531856 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:30.536770 systemd[1]: Started session-19.scope. Apr 12 18:34:30.537285 systemd-logind[1298]: New session 19 of user core. Apr 12 18:34:30.908833 sshd[3889]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:30.912217 systemd[1]: sshd@16-10.200.20.17:22-10.200.12.6:54048.service: Deactivated successfully. Apr 12 18:34:30.912972 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 18:34:30.913659 systemd-logind[1298]: Session 19 logged out. Waiting for processes to exit. Apr 12 18:34:30.914428 systemd-logind[1298]: Removed session 19. Apr 12 18:34:35.981766 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.12.6:59376.service. Apr 12 18:34:36.395429 sshd[3905]: Accepted publickey for core from 10.200.12.6 port 59376 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:36.397143 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:36.401666 systemd[1]: Started session-20.scope. Apr 12 18:34:36.403061 systemd-logind[1298]: New session 20 of user core. 
Apr 12 18:34:36.759496 sshd[3905]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:36.762921 systemd[1]: sshd@17-10.200.20.17:22-10.200.12.6:59376.service: Deactivated successfully. Apr 12 18:34:36.763752 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 18:34:36.764404 systemd-logind[1298]: Session 20 logged out. Waiting for processes to exit. Apr 12 18:34:36.765369 systemd-logind[1298]: Removed session 20. Apr 12 18:34:41.835958 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.12.6:59386.service. Apr 12 18:34:42.246586 sshd[3924]: Accepted publickey for core from 10.200.12.6 port 59386 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:42.248326 sshd[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:42.253034 systemd[1]: Started session-21.scope. Apr 12 18:34:42.253528 systemd-logind[1298]: New session 21 of user core. Apr 12 18:34:42.618165 sshd[3924]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:42.620857 systemd[1]: sshd@18-10.200.20.17:22-10.200.12.6:59386.service: Deactivated successfully. Apr 12 18:34:42.621695 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:34:42.622323 systemd-logind[1298]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:34:42.623106 systemd-logind[1298]: Removed session 21. Apr 12 18:34:47.688831 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.12.6:54126.service. Apr 12 18:34:48.099996 sshd[3937]: Accepted publickey for core from 10.200.12.6 port 54126 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:48.101447 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:48.106496 systemd[1]: Started session-22.scope. Apr 12 18:34:48.106863 systemd-logind[1298]: New session 22 of user core. 
Apr 12 18:34:48.481886 sshd[3937]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:48.485012 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 18:34:48.485766 systemd-logind[1298]: Session 22 logged out. Waiting for processes to exit. Apr 12 18:34:48.485913 systemd[1]: sshd@19-10.200.20.17:22-10.200.12.6:54126.service: Deactivated successfully. Apr 12 18:34:48.487118 systemd-logind[1298]: Removed session 22. Apr 12 18:34:48.557651 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.12.6:54136.service. Apr 12 18:34:49.001988 sshd[3949]: Accepted publickey for core from 10.200.12.6 port 54136 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:49.003766 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:49.007891 systemd-logind[1298]: New session 23 of user core. Apr 12 18:34:49.008389 systemd[1]: Started session-23.scope. Apr 12 18:34:51.927728 kubelet[2403]: I0412 18:34:51.927681 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-7b76m" podStartSLOduration=224.927640663 podCreationTimestamp="2024-04-12 18:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:28.5520906 +0000 UTC m=+35.397931703" watchObservedRunningTime="2024-04-12 18:34:51.927640663 +0000 UTC m=+238.773481766" Apr 12 18:34:51.935433 env[1311]: time="2024-04-12T18:34:51.935388556Z" level=info msg="StopContainer for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" with timeout 30 (s)" Apr 12 18:34:51.936273 env[1311]: time="2024-04-12T18:34:51.936230278Z" level=info msg="Stop container \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" with signal terminated" Apr 12 18:34:51.946115 systemd[1]: run-containerd-runc-k8s.io-6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c-runc.KEIdnR.mount: 
Deactivated successfully. Apr 12 18:34:51.966185 env[1311]: time="2024-04-12T18:34:51.966106610Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:34:51.972557 env[1311]: time="2024-04-12T18:34:51.972508821Z" level=info msg="StopContainer for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" with timeout 1 (s)" Apr 12 18:34:51.973007 env[1311]: time="2024-04-12T18:34:51.972971942Z" level=info msg="Stop container \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" with signal terminated" Apr 12 18:34:51.974965 systemd[1]: cri-containerd-68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144.scope: Deactivated successfully. Apr 12 18:34:51.983438 systemd-networkd[1458]: lxc_health: Link DOWN Apr 12 18:34:51.983447 systemd-networkd[1458]: lxc_health: Lost carrier Apr 12 18:34:52.010031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144-rootfs.mount: Deactivated successfully. Apr 12 18:34:52.013827 systemd[1]: cri-containerd-6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c.scope: Deactivated successfully. Apr 12 18:34:52.014198 systemd[1]: cri-containerd-6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c.scope: Consumed 7.213s CPU time. Apr 12 18:34:52.036224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c-rootfs.mount: Deactivated successfully. 
Apr 12 18:34:52.069989 env[1311]: time="2024-04-12T18:34:52.069930789Z" level=info msg="shim disconnected" id=68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144 Apr 12 18:34:52.070257 env[1311]: time="2024-04-12T18:34:52.070233310Z" level=warning msg="cleaning up after shim disconnected" id=68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144 namespace=k8s.io Apr 12 18:34:52.070369 env[1311]: time="2024-04-12T18:34:52.070353430Z" level=info msg="cleaning up dead shim" Apr 12 18:34:52.070685 env[1311]: time="2024-04-12T18:34:52.069934869Z" level=info msg="shim disconnected" id=6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c Apr 12 18:34:52.070805 env[1311]: time="2024-04-12T18:34:52.070780151Z" level=warning msg="cleaning up after shim disconnected" id=6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c namespace=k8s.io Apr 12 18:34:52.070865 env[1311]: time="2024-04-12T18:34:52.070852391Z" level=info msg="cleaning up dead shim" Apr 12 18:34:52.082275 env[1311]: time="2024-04-12T18:34:52.082226930Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4018 runtime=io.containerd.runc.v2\n" Apr 12 18:34:52.086473 env[1311]: time="2024-04-12T18:34:52.086410418Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4019 runtime=io.containerd.runc.v2\n" Apr 12 18:34:52.087027 env[1311]: time="2024-04-12T18:34:52.086988779Z" level=info msg="StopContainer for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" returns successfully" Apr 12 18:34:52.087940 env[1311]: time="2024-04-12T18:34:52.087909900Z" level=info msg="StopPodSandbox for \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\"" Apr 12 18:34:52.089984 env[1311]: time="2024-04-12T18:34:52.088050581Z" level=info msg="Container to stop 
\"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:52.091146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071-shm.mount: Deactivated successfully. Apr 12 18:34:52.092416 env[1311]: time="2024-04-12T18:34:52.092366228Z" level=info msg="StopContainer for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" returns successfully" Apr 12 18:34:52.094036 env[1311]: time="2024-04-12T18:34:52.093997751Z" level=info msg="StopPodSandbox for \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\"" Apr 12 18:34:52.094282 env[1311]: time="2024-04-12T18:34:52.094072311Z" level=info msg="Container to stop \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:52.094282 env[1311]: time="2024-04-12T18:34:52.094086511Z" level=info msg="Container to stop \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:52.094282 env[1311]: time="2024-04-12T18:34:52.094105471Z" level=info msg="Container to stop \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:52.094282 env[1311]: time="2024-04-12T18:34:52.094117911Z" level=info msg="Container to stop \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:52.094282 env[1311]: time="2024-04-12T18:34:52.094128311Z" level=info msg="Container to stop \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:52.099945 systemd[1]: 
cri-containerd-62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df.scope: Deactivated successfully. Apr 12 18:34:52.110143 systemd[1]: cri-containerd-979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071.scope: Deactivated successfully. Apr 12 18:34:52.136144 env[1311]: time="2024-04-12T18:34:52.136090783Z" level=info msg="shim disconnected" id=62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df Apr 12 18:34:52.136144 env[1311]: time="2024-04-12T18:34:52.136140104Z" level=warning msg="cleaning up after shim disconnected" id=62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df namespace=k8s.io Apr 12 18:34:52.136144 env[1311]: time="2024-04-12T18:34:52.136149704Z" level=info msg="cleaning up dead shim" Apr 12 18:34:52.137346 env[1311]: time="2024-04-12T18:34:52.137301626Z" level=info msg="shim disconnected" id=979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071 Apr 12 18:34:52.138078 env[1311]: time="2024-04-12T18:34:52.138045707Z" level=warning msg="cleaning up after shim disconnected" id=979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071 namespace=k8s.io Apr 12 18:34:52.138272 env[1311]: time="2024-04-12T18:34:52.138238387Z" level=info msg="cleaning up dead shim" Apr 12 18:34:52.145224 env[1311]: time="2024-04-12T18:34:52.145157799Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4081 runtime=io.containerd.runc.v2\n" Apr 12 18:34:52.145531 env[1311]: time="2024-04-12T18:34:52.145500720Z" level=info msg="TearDown network for sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" successfully" Apr 12 18:34:52.145616 env[1311]: time="2024-04-12T18:34:52.145529720Z" level=info msg="StopPodSandbox for \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" returns successfully" Apr 12 18:34:52.168038 env[1311]: time="2024-04-12T18:34:52.167993199Z" level=warning msg="cleanup 
warnings time=\"2024-04-12T18:34:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4086 runtime=io.containerd.runc.v2\n" Apr 12 18:34:52.168555 env[1311]: time="2024-04-12T18:34:52.168525159Z" level=info msg="TearDown network for sandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" successfully" Apr 12 18:34:52.168707 env[1311]: time="2024-04-12T18:34:52.168687560Z" level=info msg="StopPodSandbox for \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" returns successfully" Apr 12 18:34:52.199035 kubelet[2403]: I0412 18:34:52.198913 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-xtables-lock\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199035 kubelet[2403]: I0412 18:34:52.198958 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-lib-modules\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199035 kubelet[2403]: I0412 18:34:52.198985 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-config-path\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199035 kubelet[2403]: I0412 18:34:52.199003 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-net\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199035 kubelet[2403]: I0412 
18:34:52.199039 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wzg\" (UniqueName: \"kubernetes.io/projected/59397919-f41f-4edf-be3d-740847a35d37-kube-api-access-d7wzg\") pod \"59397919-f41f-4edf-be3d-740847a35d37\" (UID: \"59397919-f41f-4edf-be3d-740847a35d37\") " Apr 12 18:34:52.199290 kubelet[2403]: I0412 18:34:52.199060 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-bpf-maps\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199290 kubelet[2403]: I0412 18:34:52.199083 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvcdc\" (UniqueName: \"kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-kube-api-access-cvcdc\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199290 kubelet[2403]: I0412 18:34:52.199101 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hostproc\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199290 kubelet[2403]: I0412 18:34:52.199121 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-kernel\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199290 kubelet[2403]: I0412 18:34:52.199138 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-run\") pod 
\"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199290 kubelet[2403]: I0412 18:34:52.199158 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-clustermesh-secrets\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199426 kubelet[2403]: I0412 18:34:52.199175 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cni-path\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199426 kubelet[2403]: I0412 18:34:52.199196 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59397919-f41f-4edf-be3d-740847a35d37-cilium-config-path\") pod \"59397919-f41f-4edf-be3d-740847a35d37\" (UID: \"59397919-f41f-4edf-be3d-740847a35d37\") " Apr 12 18:34:52.199426 kubelet[2403]: I0412 18:34:52.199216 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hubble-tls\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199426 kubelet[2403]: I0412 18:34:52.199232 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-cgroup\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199426 kubelet[2403]: I0412 18:34:52.199252 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-etc-cni-netd\") pod \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\" (UID: \"f27e7037-5dda-40cf-a2ab-5d00492f2bb2\") " Apr 12 18:34:52.199426 kubelet[2403]: I0412 18:34:52.199309 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.199561 kubelet[2403]: I0412 18:34:52.199343 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.199561 kubelet[2403]: W0412 18:34:52.199504 2403 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f27e7037-5dda-40cf-a2ab-5d00492f2bb2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:34:52.199718 kubelet[2403]: I0412 18:34:52.199689 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.199822 kubelet[2403]: I0412 18:34:52.199805 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.199912 kubelet[2403]: I0412 18:34:52.199900 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.200648 kubelet[2403]: I0412 18:34:52.200619 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.202598 kubelet[2403]: I0412 18:34:52.202542 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.202906 kubelet[2403]: I0412 18:34:52.202880 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cni-path" (OuterVolumeSpecName: "cni-path") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.203036 kubelet[2403]: W0412 18:34:52.203002 2403 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/59397919-f41f-4edf-be3d-740847a35d37/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:34:52.205530 kubelet[2403]: I0412 18:34:52.205491 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:34:52.206939 kubelet[2403]: I0412 18:34:52.206894 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hostproc" (OuterVolumeSpecName: "hostproc") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.207131 kubelet[2403]: I0412 18:34:52.207111 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:52.207439 kubelet[2403]: I0412 18:34:52.207417 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59397919-f41f-4edf-be3d-740847a35d37-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "59397919-f41f-4edf-be3d-740847a35d37" (UID: "59397919-f41f-4edf-be3d-740847a35d37"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:34:52.207688 kubelet[2403]: I0412 18:34:52.207667 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59397919-f41f-4edf-be3d-740847a35d37-kube-api-access-d7wzg" (OuterVolumeSpecName: "kube-api-access-d7wzg") pod "59397919-f41f-4edf-be3d-740847a35d37" (UID: "59397919-f41f-4edf-be3d-740847a35d37"). InnerVolumeSpecName "kube-api-access-d7wzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:52.208428 kubelet[2403]: I0412 18:34:52.208400 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-kube-api-access-cvcdc" (OuterVolumeSpecName: "kube-api-access-cvcdc") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "kube-api-access-cvcdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:52.210273 kubelet[2403]: I0412 18:34:52.210239 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:52.210505 kubelet[2403]: I0412 18:34:52.210487 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f27e7037-5dda-40cf-a2ab-5d00492f2bb2" (UID: "f27e7037-5dda-40cf-a2ab-5d00492f2bb2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:34:52.300262 kubelet[2403]: I0412 18:34:52.300204 2403 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hostproc\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300262 kubelet[2403]: I0412 18:34:52.300259 2403 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-bpf-maps\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300262 kubelet[2403]: I0412 18:34:52.300272 2403 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cvcdc\" (UniqueName: \"kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-kube-api-access-cvcdc\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300284 2403 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300296 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-run\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300308 2403 
reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-clustermesh-secrets\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300317 2403 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cni-path\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300328 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59397919-f41f-4edf-be3d-740847a35d37-cilium-config-path\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300337 2403 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-hubble-tls\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300347 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-cgroup\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300504 kubelet[2403]: I0412 18:34:52.300356 2403 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-etc-cni-netd\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300727 kubelet[2403]: I0412 18:34:52.300366 2403 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-xtables-lock\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300727 kubelet[2403]: I0412 18:34:52.300376 2403 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-lib-modules\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300727 kubelet[2403]: I0412 18:34:52.300386 2403 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d7wzg\" (UniqueName: \"kubernetes.io/projected/59397919-f41f-4edf-be3d-740847a35d37-kube-api-access-d7wzg\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300727 kubelet[2403]: I0412 18:34:52.300399 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-cilium-config-path\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.300727 kubelet[2403]: I0412 18:34:52.300410 2403 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f27e7037-5dda-40cf-a2ab-5d00492f2bb2-host-proc-sys-net\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:52.903454 kubelet[2403]: I0412 18:34:52.903425 2403 scope.go:115] "RemoveContainer" containerID="68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144" Apr 12 18:34:52.906888 env[1311]: time="2024-04-12T18:34:52.906502794Z" level=info msg="RemoveContainer for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\"" Apr 12 18:34:52.907202 systemd[1]: Removed slice kubepods-besteffort-pod59397919_f41f_4edf_be3d_740847a35d37.slice. Apr 12 18:34:52.918730 systemd[1]: Removed slice kubepods-burstable-podf27e7037_5dda_40cf_a2ab_5d00492f2bb2.slice. Apr 12 18:34:52.918822 systemd[1]: kubepods-burstable-podf27e7037_5dda_40cf_a2ab_5d00492f2bb2.slice: Consumed 7.313s CPU time. 
Apr 12 18:34:52.921230 env[1311]: time="2024-04-12T18:34:52.921089099Z" level=info msg="RemoveContainer for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" returns successfully" Apr 12 18:34:52.921418 kubelet[2403]: I0412 18:34:52.921384 2403 scope.go:115] "RemoveContainer" containerID="68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144" Apr 12 18:34:52.921741 env[1311]: time="2024-04-12T18:34:52.921662460Z" level=error msg="ContainerStatus for \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\": not found" Apr 12 18:34:52.922073 kubelet[2403]: E0412 18:34:52.921873 2403 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\": not found" containerID="68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144" Apr 12 18:34:52.922073 kubelet[2403]: I0412 18:34:52.921915 2403 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144} err="failed to get container status \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\": rpc error: code = NotFound desc = an error occurred when try to find container \"68e745bd9f23cbf7da8f6c798466d8dc73d7b07f1282ccd0c75a7bcc0329a144\": not found" Apr 12 18:34:52.922073 kubelet[2403]: I0412 18:34:52.921928 2403 scope.go:115] "RemoveContainer" containerID="6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c" Apr 12 18:34:52.924923 env[1311]: time="2024-04-12T18:34:52.924638745Z" level=info msg="RemoveContainer for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\"" Apr 12 18:34:52.934082 env[1311]: 
time="2024-04-12T18:34:52.933183000Z" level=info msg="RemoveContainer for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" returns successfully" Apr 12 18:34:52.934563 kubelet[2403]: I0412 18:34:52.934542 2403 scope.go:115] "RemoveContainer" containerID="696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8" Apr 12 18:34:52.936339 env[1311]: time="2024-04-12T18:34:52.936303165Z" level=info msg="RemoveContainer for \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\"" Apr 12 18:34:52.942167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071-rootfs.mount: Deactivated successfully. Apr 12 18:34:52.942273 systemd[1]: var-lib-kubelet-pods-59397919\x2df41f\x2d4edf\x2dbe3d\x2d740847a35d37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7wzg.mount: Deactivated successfully. Apr 12 18:34:52.942334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df-rootfs.mount: Deactivated successfully. Apr 12 18:34:52.942384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df-shm.mount: Deactivated successfully. Apr 12 18:34:52.942435 systemd[1]: var-lib-kubelet-pods-f27e7037\x2d5dda\x2d40cf\x2da2ab\x2d5d00492f2bb2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcvcdc.mount: Deactivated successfully. Apr 12 18:34:52.942486 systemd[1]: var-lib-kubelet-pods-f27e7037\x2d5dda\x2d40cf\x2da2ab\x2d5d00492f2bb2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:34:52.942536 systemd[1]: var-lib-kubelet-pods-f27e7037\x2d5dda\x2d40cf\x2da2ab\x2d5d00492f2bb2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 12 18:34:52.947748 env[1311]: time="2024-04-12T18:34:52.947690825Z" level=info msg="RemoveContainer for \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\" returns successfully" Apr 12 18:34:52.948000 kubelet[2403]: I0412 18:34:52.947968 2403 scope.go:115] "RemoveContainer" containerID="4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8" Apr 12 18:34:52.949275 env[1311]: time="2024-04-12T18:34:52.949238748Z" level=info msg="RemoveContainer for \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\"" Apr 12 18:34:52.956546 env[1311]: time="2024-04-12T18:34:52.956502280Z" level=info msg="RemoveContainer for \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\" returns successfully" Apr 12 18:34:52.956963 kubelet[2403]: I0412 18:34:52.956926 2403 scope.go:115] "RemoveContainer" containerID="bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42" Apr 12 18:34:52.958374 env[1311]: time="2024-04-12T18:34:52.958328803Z" level=info msg="RemoveContainer for \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\"" Apr 12 18:34:52.968377 env[1311]: time="2024-04-12T18:34:52.968325100Z" level=info msg="RemoveContainer for \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\" returns successfully" Apr 12 18:34:52.968652 kubelet[2403]: I0412 18:34:52.968620 2403 scope.go:115] "RemoveContainer" containerID="47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa" Apr 12 18:34:52.970130 env[1311]: time="2024-04-12T18:34:52.970074583Z" level=info msg="RemoveContainer for \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\"" Apr 12 18:34:52.977529 env[1311]: time="2024-04-12T18:34:52.977458796Z" level=info msg="RemoveContainer for \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\" returns successfully" Apr 12 18:34:52.978031 kubelet[2403]: I0412 18:34:52.977988 2403 scope.go:115] "RemoveContainer" 
containerID="6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c" Apr 12 18:34:52.978361 env[1311]: time="2024-04-12T18:34:52.978294318Z" level=error msg="ContainerStatus for \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\": not found" Apr 12 18:34:52.978684 kubelet[2403]: E0412 18:34:52.978664 2403 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\": not found" containerID="6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c" Apr 12 18:34:52.978841 kubelet[2403]: I0412 18:34:52.978829 2403 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c} err="failed to get container status \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b1a81ddd2cb329b5e687f6eaa299eb267244c3fa9f506cdc6df5fb471ebe11c\": not found" Apr 12 18:34:52.978910 kubelet[2403]: I0412 18:34:52.978901 2403 scope.go:115] "RemoveContainer" containerID="696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8" Apr 12 18:34:52.979222 env[1311]: time="2024-04-12T18:34:52.979159879Z" level=error msg="ContainerStatus for \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\": not found" Apr 12 18:34:52.979403 kubelet[2403]: E0412 18:34:52.979378 2403 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\": not found" containerID="696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8" Apr 12 18:34:52.979450 kubelet[2403]: I0412 18:34:52.979418 2403 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8} err="failed to get container status \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"696a658d6449c13b1075d9c81907d4bad2b4c4e93ada79cb41dbd3c0781db1d8\": not found" Apr 12 18:34:52.979450 kubelet[2403]: I0412 18:34:52.979430 2403 scope.go:115] "RemoveContainer" containerID="4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8" Apr 12 18:34:52.979838 env[1311]: time="2024-04-12T18:34:52.979784000Z" level=error msg="ContainerStatus for \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\": not found" Apr 12 18:34:52.980102 kubelet[2403]: E0412 18:34:52.980081 2403 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\": not found" containerID="4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8" Apr 12 18:34:52.980162 kubelet[2403]: I0412 18:34:52.980112 2403 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8} err="failed to get container status \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"4d277363e1cc9ef35472c7f03a48d7764872fca4fe65e9043071de87a74e31c8\": not found" Apr 12 18:34:52.980162 kubelet[2403]: I0412 18:34:52.980124 2403 scope.go:115] "RemoveContainer" containerID="bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42" Apr 12 18:34:52.980372 env[1311]: time="2024-04-12T18:34:52.980314801Z" level=error msg="ContainerStatus for \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\": not found" Apr 12 18:34:52.980531 kubelet[2403]: E0412 18:34:52.980515 2403 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\": not found" containerID="bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42" Apr 12 18:34:52.980703 kubelet[2403]: I0412 18:34:52.980689 2403 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42} err="failed to get container status \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd5ef7ce2a429166cb741e52f0a51b16e2b10f378c182a7a5d52cd7e84e10e42\": not found" Apr 12 18:34:52.980788 kubelet[2403]: I0412 18:34:52.980777 2403 scope.go:115] "RemoveContainer" containerID="47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa" Apr 12 18:34:52.981095 env[1311]: time="2024-04-12T18:34:52.981039842Z" level=error msg="ContainerStatus for \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\": not found" Apr 12 18:34:52.981347 kubelet[2403]: E0412 18:34:52.981329 2403 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\": not found" containerID="47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa" Apr 12 18:34:52.981489 kubelet[2403]: I0412 18:34:52.981476 2403 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa} err="failed to get container status \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"47f25b62cd47e53c41678a906884ff864ee88c248294f22b309e134e355b48aa\": not found" Apr 12 18:34:53.309265 kubelet[2403]: I0412 18:34:53.309234 2403 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=59397919-f41f-4edf-be3d-740847a35d37 path="/var/lib/kubelet/pods/59397919-f41f-4edf-be3d-740847a35d37/volumes" Apr 12 18:34:53.309924 kubelet[2403]: I0412 18:34:53.309890 2403 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f27e7037-5dda-40cf-a2ab-5d00492f2bb2 path="/var/lib/kubelet/pods/f27e7037-5dda-40cf-a2ab-5d00492f2bb2/volumes" Apr 12 18:34:53.353862 env[1311]: time="2024-04-12T18:34:53.353651322Z" level=info msg="StopPodSandbox for \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\"" Apr 12 18:34:53.353862 env[1311]: time="2024-04-12T18:34:53.353749522Z" level=info msg="TearDown network for sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" successfully" Apr 12 18:34:53.353862 env[1311]: time="2024-04-12T18:34:53.353782842Z" level=info msg="StopPodSandbox for \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" 
returns successfully" Apr 12 18:34:53.354560 env[1311]: time="2024-04-12T18:34:53.354271683Z" level=info msg="RemovePodSandbox for \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\"" Apr 12 18:34:53.354560 env[1311]: time="2024-04-12T18:34:53.354303843Z" level=info msg="Forcibly stopping sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\"" Apr 12 18:34:53.354560 env[1311]: time="2024-04-12T18:34:53.354370563Z" level=info msg="TearDown network for sandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" successfully" Apr 12 18:34:53.361585 env[1311]: time="2024-04-12T18:34:53.361528096Z" level=info msg="RemovePodSandbox \"62b048adc1f839b4bf3248abf36bb41f27c90ad235a632e1e379569b5ae799df\" returns successfully" Apr 12 18:34:53.362326 env[1311]: time="2024-04-12T18:34:53.362140297Z" level=info msg="StopPodSandbox for \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\"" Apr 12 18:34:53.362326 env[1311]: time="2024-04-12T18:34:53.362226457Z" level=info msg="TearDown network for sandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" successfully" Apr 12 18:34:53.362326 env[1311]: time="2024-04-12T18:34:53.362257137Z" level=info msg="StopPodSandbox for \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" returns successfully" Apr 12 18:34:53.363984 env[1311]: time="2024-04-12T18:34:53.362874778Z" level=info msg="RemovePodSandbox for \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\"" Apr 12 18:34:53.363984 env[1311]: time="2024-04-12T18:34:53.362905938Z" level=info msg="Forcibly stopping sandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\"" Apr 12 18:34:53.363984 env[1311]: time="2024-04-12T18:34:53.362974858Z" level=info msg="TearDown network for sandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" successfully" Apr 12 18:34:53.370622 env[1311]: time="2024-04-12T18:34:53.370566471Z" 
level=info msg="RemovePodSandbox \"979cb5d547b66811dfc4c9125f537419ebfcfcb441c7a53fc1e0caba6d71c071\" returns successfully" Apr 12 18:34:53.550212 kubelet[2403]: E0412 18:34:53.550165 2403 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:34:53.947444 sshd[3949]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:53.950909 systemd[1]: sshd@20-10.200.20.17:22-10.200.12.6:54136.service: Deactivated successfully. Apr 12 18:34:53.951817 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:34:53.952032 systemd[1]: session-23.scope: Consumed 2.036s CPU time. Apr 12 18:34:53.952506 systemd-logind[1298]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:34:53.954133 systemd-logind[1298]: Removed session 23. Apr 12 18:34:54.017911 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.12.6:54140.service. Apr 12 18:34:54.431969 sshd[4115]: Accepted publickey for core from 10.200.12.6 port 54140 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:54.433407 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:54.437632 systemd-logind[1298]: New session 24 of user core. Apr 12 18:34:54.438376 systemd[1]: Started session-24.scope. 
Apr 12 18:34:56.130461 kubelet[2403]: I0412 18:34:56.130396 2403 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:34:56.130883 kubelet[2403]: E0412 18:34:56.130480 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f27e7037-5dda-40cf-a2ab-5d00492f2bb2" containerName="mount-cgroup" Apr 12 18:34:56.130883 kubelet[2403]: E0412 18:34:56.130491 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="59397919-f41f-4edf-be3d-740847a35d37" containerName="cilium-operator" Apr 12 18:34:56.130883 kubelet[2403]: E0412 18:34:56.130498 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f27e7037-5dda-40cf-a2ab-5d00492f2bb2" containerName="cilium-agent" Apr 12 18:34:56.130883 kubelet[2403]: E0412 18:34:56.130506 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f27e7037-5dda-40cf-a2ab-5d00492f2bb2" containerName="apply-sysctl-overwrites" Apr 12 18:34:56.130883 kubelet[2403]: E0412 18:34:56.130513 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f27e7037-5dda-40cf-a2ab-5d00492f2bb2" containerName="mount-bpf-fs" Apr 12 18:34:56.130883 kubelet[2403]: E0412 18:34:56.130520 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f27e7037-5dda-40cf-a2ab-5d00492f2bb2" containerName="clean-cilium-state" Apr 12 18:34:56.130883 kubelet[2403]: I0412 18:34:56.130542 2403 memory_manager.go:346] "RemoveStaleState removing state" podUID="f27e7037-5dda-40cf-a2ab-5d00492f2bb2" containerName="cilium-agent" Apr 12 18:34:56.130883 kubelet[2403]: I0412 18:34:56.130550 2403 memory_manager.go:346] "RemoveStaleState removing state" podUID="59397919-f41f-4edf-be3d-740847a35d37" containerName="cilium-operator" Apr 12 18:34:56.136352 systemd[1]: Created slice kubepods-burstable-pod41ed7409_48dd_4fae_bc29_2b97ca7cb503.slice. Apr 12 18:34:56.164706 sshd[4115]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:56.168109 systemd-logind[1298]: Session 24 logged out. 
Waiting for processes to exit. Apr 12 18:34:56.168290 systemd[1]: sshd@21-10.200.20.17:22-10.200.12.6:54140.service: Deactivated successfully. Apr 12 18:34:56.169050 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:34:56.169245 systemd[1]: session-24.scope: Consumed 1.371s CPU time. Apr 12 18:34:56.170191 systemd-logind[1298]: Removed session 24. Apr 12 18:34:56.225262 kubelet[2403]: I0412 18:34:56.225195 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-config-path\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225262 kubelet[2403]: I0412 18:34:56.225250 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw7d7\" (UniqueName: \"kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-kube-api-access-fw7d7\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225262 kubelet[2403]: I0412 18:34:56.225272 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cni-path\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225486 kubelet[2403]: I0412 18:34:56.225295 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-xtables-lock\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225486 kubelet[2403]: I0412 18:34:56.225316 2403 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hostproc\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225486 kubelet[2403]: I0412 18:34:56.225335 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-run\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225486 kubelet[2403]: I0412 18:34:56.225353 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-cgroup\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225486 kubelet[2403]: I0412 18:34:56.225372 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-clustermesh-secrets\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225486 kubelet[2403]: I0412 18:34:56.225392 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-etc-cni-netd\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225661 kubelet[2403]: I0412 18:34:56.225411 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-ipsec-secrets\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225661 kubelet[2403]: I0412 18:34:56.225430 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-net\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225661 kubelet[2403]: I0412 18:34:56.225449 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-kernel\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225661 kubelet[2403]: I0412 18:34:56.225471 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-bpf-maps\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225661 kubelet[2403]: I0412 18:34:56.225491 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-lib-modules\") pod \"cilium-p7smb\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.225661 kubelet[2403]: I0412 18:34:56.225509 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hubble-tls\") pod \"cilium-p7smb\" (UID: 
\"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " pod="kube-system/cilium-p7smb" Apr 12 18:34:56.234272 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.12.6:52594.service. Apr 12 18:34:56.441186 env[1311]: time="2024-04-12T18:34:56.440436174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7smb,Uid:41ed7409-48dd-4fae-bc29-2b97ca7cb503,Namespace:kube-system,Attempt:0,}" Apr 12 18:34:56.490754 env[1311]: time="2024-04-12T18:34:56.490678219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:34:56.490971 env[1311]: time="2024-04-12T18:34:56.490931419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:34:56.491100 env[1311]: time="2024-04-12T18:34:56.491049299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:34:56.491466 env[1311]: time="2024-04-12T18:34:56.491412980Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3 pid=4139 runtime=io.containerd.runc.v2 Apr 12 18:34:56.502327 systemd[1]: Started cri-containerd-749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3.scope. 
Apr 12 18:34:56.537064 env[1311]: time="2024-04-12T18:34:56.537010097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p7smb,Uid:41ed7409-48dd-4fae-bc29-2b97ca7cb503,Namespace:kube-system,Attempt:0,} returns sandbox id \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\"" Apr 12 18:34:56.544497 env[1311]: time="2024-04-12T18:34:56.544115829Z" level=info msg="CreateContainer within sandbox \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:34:56.576517 env[1311]: time="2024-04-12T18:34:56.576443043Z" level=info msg="CreateContainer within sandbox \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\"" Apr 12 18:34:56.578750 env[1311]: time="2024-04-12T18:34:56.577201885Z" level=info msg="StartContainer for \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\"" Apr 12 18:34:56.593097 systemd[1]: Started cri-containerd-09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912.scope. Apr 12 18:34:56.604272 systemd[1]: cri-containerd-09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912.scope: Deactivated successfully. Apr 12 18:34:56.644027 sshd[4126]: Accepted publickey for core from 10.200.12.6 port 52594 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:56.645639 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:56.650670 systemd[1]: Started session-25.scope. Apr 12 18:34:56.651704 systemd-logind[1298]: New session 25 of user core. 
Apr 12 18:34:56.668692 env[1311]: time="2024-04-12T18:34:56.668535598Z" level=info msg="shim disconnected" id=09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912 Apr 12 18:34:56.668892 env[1311]: time="2024-04-12T18:34:56.668699599Z" level=warning msg="cleaning up after shim disconnected" id=09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912 namespace=k8s.io Apr 12 18:34:56.668892 env[1311]: time="2024-04-12T18:34:56.668713159Z" level=info msg="cleaning up dead shim" Apr 12 18:34:56.676612 env[1311]: time="2024-04-12T18:34:56.676538012Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4200 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:34:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Apr 12 18:34:56.676937 env[1311]: time="2024-04-12T18:34:56.676830532Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed" Apr 12 18:34:56.678669 env[1311]: time="2024-04-12T18:34:56.678625055Z" level=error msg="Failed to pipe stderr of container \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\"" error="reading from a closed fifo" Apr 12 18:34:56.678869 env[1311]: time="2024-04-12T18:34:56.678829056Z" level=error msg="Failed to pipe stdout of container \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\"" error="reading from a closed fifo" Apr 12 18:34:56.683227 env[1311]: time="2024-04-12T18:34:56.683158103Z" level=error msg="StartContainer for \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Apr 12 18:34:56.684087 kubelet[2403]: E0412 18:34:56.683659 2403 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912" Apr 12 18:34:56.684087 kubelet[2403]: E0412 18:34:56.683831 2403 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Apr 12 18:34:56.684087 kubelet[2403]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Apr 12 18:34:56.684087 kubelet[2403]: rm /hostbin/cilium-mount Apr 12 18:34:56.684331 kubelet[2403]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fw7d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-p7smb_kube-system(41ed7409-48dd-4fae-bc29-2b97ca7cb503): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Apr 12 18:34:56.684417 kubelet[2403]: E0412 18:34:56.683892 2403 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-p7smb" podUID=41ed7409-48dd-4fae-bc29-2b97ca7cb503 Apr 12 18:34:56.923958 env[1311]: time="2024-04-12T18:34:56.923874989Z" level=info msg="CreateContainer within sandbox \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Apr 12 18:34:56.959458 env[1311]: time="2024-04-12T18:34:56.959397489Z" level=info msg="CreateContainer within sandbox \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\"" Apr 12 18:34:56.960461 env[1311]: time="2024-04-12T18:34:56.960423290Z" level=info msg="StartContainer for 
\"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\"" Apr 12 18:34:56.983241 systemd[1]: Started cri-containerd-266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf.scope. Apr 12 18:34:57.012137 systemd[1]: cri-containerd-266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf.scope: Deactivated successfully. Apr 12 18:34:57.033676 env[1311]: time="2024-04-12T18:34:57.033611533Z" level=info msg="shim disconnected" id=266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf Apr 12 18:34:57.033676 env[1311]: time="2024-04-12T18:34:57.033670173Z" level=warning msg="cleaning up after shim disconnected" id=266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf namespace=k8s.io Apr 12 18:34:57.033676 env[1311]: time="2024-04-12T18:34:57.033681133Z" level=info msg="cleaning up dead shim" Apr 12 18:34:57.042336 env[1311]: time="2024-04-12T18:34:57.042274428Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4245 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:34:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Apr 12 18:34:57.042636 env[1311]: time="2024-04-12T18:34:57.042550268Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed" Apr 12 18:34:57.043072 env[1311]: time="2024-04-12T18:34:57.043025389Z" level=error msg="Failed to pipe stderr of container \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\"" error="reading from a closed fifo" Apr 12 18:34:57.043072 env[1311]: time="2024-04-12T18:34:57.043024189Z" level=error msg="Failed to pipe stdout of container \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\"" error="reading from a closed fifo" Apr 12 
18:34:57.044620 sshd[4126]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:57.047149 systemd[1]: sshd@22-10.200.20.17:22-10.200.12.6:52594.service: Deactivated successfully. Apr 12 18:34:57.048052 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:34:57.049649 systemd-logind[1298]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:34:57.050588 systemd-logind[1298]: Removed session 25. Apr 12 18:34:57.050983 env[1311]: time="2024-04-12T18:34:57.050928842Z" level=error msg="StartContainer for \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Apr 12 18:34:57.051345 kubelet[2403]: E0412 18:34:57.051317 2403 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf" Apr 12 18:34:57.051870 kubelet[2403]: E0412 18:34:57.051436 2403 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Apr 12 18:34:57.051870 kubelet[2403]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Apr 12 18:34:57.051870 kubelet[2403]: rm /hostbin/cilium-mount Apr 12 18:34:57.052003 kubelet[2403]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-fw7d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-p7smb_kube-system(41ed7409-48dd-4fae-bc29-2b97ca7cb503): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Apr 12 18:34:57.052003 kubelet[2403]: E0412 18:34:57.051486 2403 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-p7smb" podUID=41ed7409-48dd-4fae-bc29-2b97ca7cb503 Apr 12 18:34:57.115167 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.12.6:52608.service. Apr 12 18:34:57.524055 sshd[4261]: Accepted publickey for core from 10.200.12.6 port 52608 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:57.525865 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:57.530966 systemd-logind[1298]: New session 26 of user core. Apr 12 18:34:57.531274 systemd[1]: Started session-26.scope. Apr 12 18:34:57.923987 kubelet[2403]: I0412 18:34:57.923958 2403 scope.go:115] "RemoveContainer" containerID="09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912" Apr 12 18:34:57.924586 env[1311]: time="2024-04-12T18:34:57.924533466Z" level=info msg="StopPodSandbox for \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\"" Apr 12 18:34:57.928032 env[1311]: time="2024-04-12T18:34:57.927995311Z" level=info msg="Container to stop \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:57.928688 env[1311]: time="2024-04-12T18:34:57.928659633Z" level=info msg="Container to stop \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:34:57.930494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3-shm.mount: Deactivated successfully. 
Apr 12 18:34:57.934346 env[1311]: time="2024-04-12T18:34:57.934306922Z" level=info msg="RemoveContainer for \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\"" Apr 12 18:34:57.945721 env[1311]: time="2024-04-12T18:34:57.945666061Z" level=info msg="RemoveContainer for \"09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912\" returns successfully" Apr 12 18:34:57.948686 systemd[1]: cri-containerd-749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3.scope: Deactivated successfully. Apr 12 18:34:57.984362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3-rootfs.mount: Deactivated successfully. Apr 12 18:34:58.005373 env[1311]: time="2024-04-12T18:34:58.005320521Z" level=info msg="shim disconnected" id=749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3 Apr 12 18:34:58.005675 env[1311]: time="2024-04-12T18:34:58.005653721Z" level=warning msg="cleaning up after shim disconnected" id=749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3 namespace=k8s.io Apr 12 18:34:58.005768 env[1311]: time="2024-04-12T18:34:58.005753922Z" level=info msg="cleaning up dead shim" Apr 12 18:34:58.018726 env[1311]: time="2024-04-12T18:34:58.018673783Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4286 runtime=io.containerd.runc.v2\n" Apr 12 18:34:58.019219 env[1311]: time="2024-04-12T18:34:58.019187624Z" level=info msg="TearDown network for sandbox \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\" successfully" Apr 12 18:34:58.019337 env[1311]: time="2024-04-12T18:34:58.019319504Z" level=info msg="StopPodSandbox for \"749e61a187d1918fd9ef303c72651a723b2274d3c6afde4385c5740b90f995b3\" returns successfully" Apr 12 18:34:58.038233 kubelet[2403]: I0412 18:34:58.038127 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-run\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.038508 kubelet[2403]: I0412 18:34:58.038492 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-ipsec-secrets\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.038724 kubelet[2403]: I0412 18:34:58.038653 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw7d7\" (UniqueName: \"kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-kube-api-access-fw7d7\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.039252 kubelet[2403]: I0412 18:34:58.039210 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-config-path\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.039419 kubelet[2403]: I0412 18:34:58.039407 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cni-path\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.039526 kubelet[2403]: I0412 18:34:58.039516 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-xtables-lock\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.040622 
kubelet[2403]: I0412 18:34:58.040604 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-lib-modules\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.040764 kubelet[2403]: I0412 18:34:58.040751 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-etc-cni-netd\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.040864 kubelet[2403]: I0412 18:34:58.040853 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-net\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.040962 kubelet[2403]: I0412 18:34:58.040952 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hostproc\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.041052 kubelet[2403]: I0412 18:34:58.041043 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-cgroup\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.041144 kubelet[2403]: I0412 18:34:58.041135 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-clustermesh-secrets\") pod 
\"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.041249 kubelet[2403]: I0412 18:34:58.041239 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hubble-tls\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.041337 kubelet[2403]: I0412 18:34:58.041328 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-bpf-maps\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.041421 kubelet[2403]: I0412 18:34:58.041412 2403 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-kernel\") pod \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\" (UID: \"41ed7409-48dd-4fae-bc29-2b97ca7cb503\") " Apr 12 18:34:58.041553 kubelet[2403]: I0412 18:34:58.041532 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.041655 kubelet[2403]: I0412 18:34:58.039646 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.041727 kubelet[2403]: I0412 18:34:58.039714 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.041797 kubelet[2403]: W0412 18:34:58.039849 2403 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/41ed7409-48dd-4fae-bc29-2b97ca7cb503/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:34:58.043933 kubelet[2403]: I0412 18:34:58.043898 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:34:58.044208 systemd[1]: var-lib-kubelet-pods-41ed7409\x2d48dd\x2d4fae\x2dbc29\x2d2b97ca7cb503-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfw7d7.mount: Deactivated successfully. Apr 12 18:34:58.044744 kubelet[2403]: I0412 18:34:58.039877 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cni-path" (OuterVolumeSpecName: "cni-path") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.044844 kubelet[2403]: I0412 18:34:58.044076 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.045020 kubelet[2403]: I0412 18:34:58.044092 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.045110 kubelet[2403]: I0412 18:34:58.044104 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.045182 kubelet[2403]: I0412 18:34:58.044115 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hostproc" (OuterVolumeSpecName: "hostproc") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.045242 kubelet[2403]: I0412 18:34:58.044139 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.045314 kubelet[2403]: I0412 18:34:58.044700 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:34:58.045536 kubelet[2403]: I0412 18:34:58.045509 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-kube-api-access-fw7d7" (OuterVolumeSpecName: "kube-api-access-fw7d7") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "kube-api-access-fw7d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:58.053027 systemd[1]: var-lib-kubelet-pods-41ed7409\x2d48dd\x2d4fae\x2dbc29\x2d2b97ca7cb503-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:34:58.053133 systemd[1]: var-lib-kubelet-pods-41ed7409\x2d48dd\x2d4fae\x2dbc29\x2d2b97ca7cb503-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 12 18:34:58.056594 kubelet[2403]: I0412 18:34:58.056538 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:34:58.057512 kubelet[2403]: I0412 18:34:58.057481 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:34:58.059886 kubelet[2403]: I0412 18:34:58.059854 2403 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41ed7409-48dd-4fae-bc29-2b97ca7cb503" (UID: "41ed7409-48dd-4fae-bc29-2b97ca7cb503"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:34:58.142235 kubelet[2403]: I0412 18:34:58.142200 2403 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hubble-tls\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142443 kubelet[2403]: I0412 18:34:58.142431 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-cgroup\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142535 kubelet[2403]: I0412 18:34:58.142525 2403 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-clustermesh-secrets\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142649 kubelet[2403]: I0412 18:34:58.142639 2403 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142742 kubelet[2403]: I0412 18:34:58.142732 2403 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-bpf-maps\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142820 kubelet[2403]: I0412 18:34:58.142810 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-run\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142885 kubelet[2403]: I0412 18:34:58.142876 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-ipsec-secrets\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.142954 kubelet[2403]: I0412 18:34:58.142945 2403 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cilium-config-path\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.143020 kubelet[2403]: I0412 18:34:58.143012 2403 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fw7d7\" (UniqueName: \"kubernetes.io/projected/41ed7409-48dd-4fae-bc29-2b97ca7cb503-kube-api-access-fw7d7\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.143091 kubelet[2403]: I0412 18:34:58.143082 2403 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-cni-path\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.143159 kubelet[2403]: I0412 18:34:58.143151 2403 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-xtables-lock\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.143232 kubelet[2403]: I0412 18:34:58.143222 2403 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-lib-modules\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.143300 kubelet[2403]: I0412 18:34:58.143289 2403 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-etc-cni-netd\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\"" Apr 12 18:34:58.143373 kubelet[2403]: I0412 18:34:58.143363 2403 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-host-proc-sys-net\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\""
Apr 12 18:34:58.143435 kubelet[2403]: I0412 18:34:58.143427 2403 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41ed7409-48dd-4fae-bc29-2b97ca7cb503-hostproc\") on node \"ci-3510.3.3-a-63b2983992\" DevicePath \"\""
Apr 12 18:34:58.308905 kubelet[2403]: I0412 18:34:58.308871 2403 setters.go:548] "Node became not ready" node="ci-3510.3.3-a-63b2983992" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-12 18:34:58.308768586 +0000 UTC m=+245.154609689 LastTransitionTime:2024-04-12 18:34:58.308768586 +0000 UTC m=+245.154609689 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Apr 12 18:34:58.332387 systemd[1]: var-lib-kubelet-pods-41ed7409\x2d48dd\x2d4fae\x2dbc29\x2d2b97ca7cb503-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 12 18:34:58.551952 kubelet[2403]: E0412 18:34:58.551926 2403 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:34:58.927057 kubelet[2403]: I0412 18:34:58.927021 2403 scope.go:115] "RemoveContainer" containerID="266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf"
Apr 12 18:34:58.928967 env[1311]: time="2024-04-12T18:34:58.928660498Z" level=info msg="RemoveContainer for \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\""
Apr 12 18:34:58.931012 systemd[1]: Removed slice kubepods-burstable-pod41ed7409_48dd_4fae_bc29_2b97ca7cb503.slice.
Apr 12 18:34:58.936124 env[1311]: time="2024-04-12T18:34:58.936006591Z" level=info msg="RemoveContainer for \"266aa2e09f6d87c13c314d71f3993dfd16cb443bd6e0aa4c4628838eaeed26bf\" returns successfully"
Apr 12 18:34:58.979667 kubelet[2403]: I0412 18:34:58.979614 2403 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:34:58.979827 kubelet[2403]: E0412 18:34:58.979715 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41ed7409-48dd-4fae-bc29-2b97ca7cb503" containerName="mount-cgroup"
Apr 12 18:34:58.979827 kubelet[2403]: E0412 18:34:58.979729 2403 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41ed7409-48dd-4fae-bc29-2b97ca7cb503" containerName="mount-cgroup"
Apr 12 18:34:58.979827 kubelet[2403]: I0412 18:34:58.979761 2403 memory_manager.go:346] "RemoveStaleState removing state" podUID="41ed7409-48dd-4fae-bc29-2b97ca7cb503" containerName="mount-cgroup"
Apr 12 18:34:58.979827 kubelet[2403]: I0412 18:34:58.979768 2403 memory_manager.go:346] "RemoveStaleState removing state" podUID="41ed7409-48dd-4fae-bc29-2b97ca7cb503" containerName="mount-cgroup"
Apr 12 18:34:58.985294 systemd[1]: Created slice kubepods-burstable-pod9509f754_152c_4b36_aff2_73e09731a9a8.slice.
Apr 12 18:34:59.048980 kubelet[2403]: I0412 18:34:59.048944 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9509f754-152c-4b36-aff2-73e09731a9a8-cilium-config-path\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049273 kubelet[2403]: I0412 18:34:59.049260 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9509f754-152c-4b36-aff2-73e09731a9a8-hubble-tls\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049404 kubelet[2403]: I0412 18:34:59.049392 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-cilium-run\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049521 kubelet[2403]: I0412 18:34:59.049510 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-bpf-maps\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049651 kubelet[2403]: I0412 18:34:59.049639 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-cni-path\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049764 kubelet[2403]: I0412 18:34:59.049754 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-lib-modules\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049874 kubelet[2403]: I0412 18:34:59.049863 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9509f754-152c-4b36-aff2-73e09731a9a8-clustermesh-secrets\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.049975 kubelet[2403]: I0412 18:34:59.049965 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-hostproc\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050081 kubelet[2403]: I0412 18:34:59.050072 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-cilium-cgroup\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050182 kubelet[2403]: I0412 18:34:59.050171 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-host-proc-sys-net\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050282 kubelet[2403]: I0412 18:34:59.050272 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-host-proc-sys-kernel\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050381 kubelet[2403]: I0412 18:34:59.050372 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-xtables-lock\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050479 kubelet[2403]: I0412 18:34:59.050469 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b97\" (UniqueName: \"kubernetes.io/projected/9509f754-152c-4b36-aff2-73e09731a9a8-kube-api-access-z9b97\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050617 kubelet[2403]: I0412 18:34:59.050604 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9509f754-152c-4b36-aff2-73e09731a9a8-cilium-ipsec-secrets\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.050719 kubelet[2403]: I0412 18:34:59.050711 2403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9509f754-152c-4b36-aff2-73e09731a9a8-etc-cni-netd\") pod \"cilium-gf7mh\" (UID: \"9509f754-152c-4b36-aff2-73e09731a9a8\") " pod="kube-system/cilium-gf7mh"
Apr 12 18:34:59.289307 env[1311]: time="2024-04-12T18:34:59.288983136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gf7mh,Uid:9509f754-152c-4b36-aff2-73e09731a9a8,Namespace:kube-system,Attempt:0,}"
Apr 12 18:34:59.305330 kubelet[2403]: I0412 18:34:59.305293 2403 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=41ed7409-48dd-4fae-bc29-2b97ca7cb503 path="/var/lib/kubelet/pods/41ed7409-48dd-4fae-bc29-2b97ca7cb503/volumes"
Apr 12 18:34:59.333737 env[1311]: time="2024-04-12T18:34:59.332794208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:34:59.333737 env[1311]: time="2024-04-12T18:34:59.332838008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:34:59.333737 env[1311]: time="2024-04-12T18:34:59.332848488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:34:59.333737 env[1311]: time="2024-04-12T18:34:59.332962768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8 pid=4317 runtime=io.containerd.runc.v2
Apr 12 18:34:59.351034 systemd[1]: run-containerd-runc-k8s.io-817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8-runc.NLoe7v.mount: Deactivated successfully.
Apr 12 18:34:59.355234 systemd[1]: Started cri-containerd-817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8.scope.
Apr 12 18:34:59.379565 env[1311]: time="2024-04-12T18:34:59.379521485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gf7mh,Uid:9509f754-152c-4b36-aff2-73e09731a9a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\""
Apr 12 18:34:59.385256 env[1311]: time="2024-04-12T18:34:59.385209215Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:34:59.422253 env[1311]: time="2024-04-12T18:34:59.422200876Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617\""
Apr 12 18:34:59.424592 env[1311]: time="2024-04-12T18:34:59.422980197Z" level=info msg="StartContainer for \"57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617\""
Apr 12 18:34:59.440928 systemd[1]: Started cri-containerd-57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617.scope.
Apr 12 18:34:59.476105 env[1311]: time="2024-04-12T18:34:59.476040685Z" level=info msg="StartContainer for \"57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617\" returns successfully"
Apr 12 18:34:59.480638 systemd[1]: cri-containerd-57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617.scope: Deactivated successfully.
Apr 12 18:34:59.564477 env[1311]: time="2024-04-12T18:34:59.564340671Z" level=info msg="shim disconnected" id=57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617
Apr 12 18:34:59.564477 env[1311]: time="2024-04-12T18:34:59.564409552Z" level=warning msg="cleaning up after shim disconnected" id=57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617 namespace=k8s.io
Apr 12 18:34:59.564477 env[1311]: time="2024-04-12T18:34:59.564419392Z" level=info msg="cleaning up dead shim"
Apr 12 18:34:59.572874 env[1311]: time="2024-04-12T18:34:59.572823406Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:34:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4403 runtime=io.containerd.runc.v2\n"
Apr 12 18:34:59.773790 kubelet[2403]: W0412 18:34:59.773634 2403 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41ed7409_48dd_4fae_bc29_2b97ca7cb503.slice/cri-containerd-09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912.scope WatchSource:0}: container "09db806a39d01d885edcdec3e7b37de7d5b04289e5893efba25137c72315b912" in namespace "k8s.io": not found
Apr 12 18:34:59.932904 env[1311]: time="2024-04-12T18:34:59.932791961Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:34:59.962178 env[1311]: time="2024-04-12T18:34:59.962125970Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c\""
Apr 12 18:34:59.962932 env[1311]: time="2024-04-12T18:34:59.962904451Z" level=info msg="StartContainer for \"587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c\""
Apr 12 18:34:59.978266 systemd[1]: Started cri-containerd-587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c.scope.
Apr 12 18:35:00.014120 env[1311]: time="2024-04-12T18:35:00.013377855Z" level=info msg="StartContainer for \"587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c\" returns successfully"
Apr 12 18:35:00.018613 systemd[1]: cri-containerd-587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c.scope: Deactivated successfully.
Apr 12 18:35:00.065953 env[1311]: time="2024-04-12T18:35:00.065903181Z" level=info msg="shim disconnected" id=587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c
Apr 12 18:35:00.066313 env[1311]: time="2024-04-12T18:35:00.066287582Z" level=warning msg="cleaning up after shim disconnected" id=587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c namespace=k8s.io
Apr 12 18:35:00.066418 env[1311]: time="2024-04-12T18:35:00.066401062Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:00.074798 env[1311]: time="2024-04-12T18:35:00.074755916Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4468 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:00.938496 env[1311]: time="2024-04-12T18:35:00.938435497Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:35:00.963988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789931456.mount: Deactivated successfully.
Apr 12 18:35:00.977155 env[1311]: time="2024-04-12T18:35:00.977097881Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f\""
Apr 12 18:35:00.978043 env[1311]: time="2024-04-12T18:35:00.978009722Z" level=info msg="StartContainer for \"c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f\""
Apr 12 18:35:00.995016 systemd[1]: Started cri-containerd-c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f.scope.
Apr 12 18:35:01.026868 systemd[1]: cri-containerd-c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f.scope: Deactivated successfully.
Apr 12 18:35:01.031850 env[1311]: time="2024-04-12T18:35:01.031804611Z" level=info msg="StartContainer for \"c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f\" returns successfully"
Apr 12 18:35:01.062211 env[1311]: time="2024-04-12T18:35:01.062156420Z" level=info msg="shim disconnected" id=c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f
Apr 12 18:35:01.062211 env[1311]: time="2024-04-12T18:35:01.062208060Z" level=warning msg="cleaning up after shim disconnected" id=c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f namespace=k8s.io
Apr 12 18:35:01.062211 env[1311]: time="2024-04-12T18:35:01.062218140Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:01.069287 env[1311]: time="2024-04-12T18:35:01.069236232Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4526 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:01.951243 env[1311]: time="2024-04-12T18:35:01.951197115Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:35:01.988568 env[1311]: time="2024-04-12T18:35:01.988517096Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6\""
Apr 12 18:35:01.989545 env[1311]: time="2024-04-12T18:35:01.989504698Z" level=info msg="StartContainer for \"3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6\""
Apr 12 18:35:02.010258 systemd[1]: Started cri-containerd-3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6.scope.
Apr 12 18:35:02.033493 systemd[1]: cri-containerd-3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6.scope: Deactivated successfully.
Apr 12 18:35:02.035628 env[1311]: time="2024-04-12T18:35:02.035374693Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9509f754_152c_4b36_aff2_73e09731a9a8.slice/cri-containerd-3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6.scope/memory.events\": no such file or directory"
Apr 12 18:35:02.040920 env[1311]: time="2024-04-12T18:35:02.040872902Z" level=info msg="StartContainer for \"3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6\" returns successfully"
Apr 12 18:35:02.069166 env[1311]: time="2024-04-12T18:35:02.069110668Z" level=info msg="shim disconnected" id=3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6
Apr 12 18:35:02.069166 env[1311]: time="2024-04-12T18:35:02.069162348Z" level=warning msg="cleaning up after shim disconnected" id=3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6 namespace=k8s.io
Apr 12 18:35:02.069166 env[1311]: time="2024-04-12T18:35:02.069172668Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:02.077324 env[1311]: time="2024-04-12T18:35:02.077267441Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4583 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:02.332678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6-rootfs.mount: Deactivated successfully.
Apr 12 18:35:02.884189 kubelet[2403]: W0412 18:35:02.884140 2403 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9509f754_152c_4b36_aff2_73e09731a9a8.slice/cri-containerd-57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617.scope WatchSource:0}: task 57816c28c589ff8970ad78b9d9d4abf31e8ad92a5020f88a7e76d41853be8617 not found: not found
Apr 12 18:35:02.949060 env[1311]: time="2024-04-12T18:35:02.949010899Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:35:02.984855 env[1311]: time="2024-04-12T18:35:02.984801238Z" level=info msg="CreateContainer within sandbox \"817b77dc2a3c4024d1117f55142ddf99624534d27f688bae0d24527cf74c55f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96\""
Apr 12 18:35:02.986326 env[1311]: time="2024-04-12T18:35:02.986280800Z" level=info msg="StartContainer for \"0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96\""
Apr 12 18:35:03.014401 systemd[1]: Started cri-containerd-0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96.scope.
Apr 12 18:35:03.053111 env[1311]: time="2024-04-12T18:35:03.053049108Z" level=info msg="StartContainer for \"0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96\" returns successfully"
Apr 12 18:35:03.332705 systemd[1]: run-containerd-runc-k8s.io-0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96-runc.Cun1SR.mount: Deactivated successfully.
Apr 12 18:35:03.506990 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Apr 12 18:35:05.999941 kubelet[2403]: W0412 18:35:05.999848 2403 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9509f754_152c_4b36_aff2_73e09731a9a8.slice/cri-containerd-587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c.scope WatchSource:0}: task 587f2abfe188e8a911efd23d08c8c838a656aa90952af91966db7bd2ddf09b4c not found: not found
Apr 12 18:35:06.109716 systemd-networkd[1458]: lxc_health: Link UP
Apr 12 18:35:06.129655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:35:06.133058 systemd-networkd[1458]: lxc_health: Gained carrier
Apr 12 18:35:06.182471 systemd[1]: run-containerd-runc-k8s.io-0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96-runc.02dYpy.mount: Deactivated successfully.
Apr 12 18:35:07.219740 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Apr 12 18:35:07.309674 kubelet[2403]: I0412 18:35:07.309637 2403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gf7mh" podStartSLOduration=9.309596692 podCreationTimestamp="2024-04-12 18:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:35:03.977512284 +0000 UTC m=+250.823353387" watchObservedRunningTime="2024-04-12 18:35:07.309596692 +0000 UTC m=+254.155437795"
Apr 12 18:35:09.108590 kubelet[2403]: W0412 18:35:09.108531 2403 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9509f754_152c_4b36_aff2_73e09731a9a8.slice/cri-containerd-c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f.scope WatchSource:0}: task c744afb8e108691f8dcf900f2150106c6409a7fb053853ae64a9ec6979e4a54f not found: not found
Apr 12 18:35:10.533282 systemd[1]: run-containerd-runc-k8s.io-0b8ae28bbee0909ee302aa75af0b6e5526aa12fcb9c76c623327f32e8b678f96-runc.En2WQJ.mount: Deactivated successfully.
Apr 12 18:35:12.217487 kubelet[2403]: W0412 18:35:12.217445 2403 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9509f754_152c_4b36_aff2_73e09731a9a8.slice/cri-containerd-3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6.scope WatchSource:0}: task 3711caa139fddb7c58489014199518d82fb20a8ed060e381eb841b132f0d0df6 not found: not found
Apr 12 18:35:12.805027 sshd[4261]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:12.808194 systemd-logind[1298]: Session 26 logged out. Waiting for processes to exit.
Apr 12 18:35:12.808278 systemd[1]: session-26.scope: Deactivated successfully.
Apr 12 18:35:12.808934 systemd[1]: sshd@23-10.200.20.17:22-10.200.12.6:52608.service: Deactivated successfully.
Apr 12 18:35:12.810167 systemd-logind[1298]: Removed session 26.
Apr 12 18:35:26.603566 systemd[1]: cri-containerd-9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940.scope: Deactivated successfully.
Apr 12 18:35:26.603898 systemd[1]: cri-containerd-9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940.scope: Consumed 3.010s CPU time.
Apr 12 18:35:26.624038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940-rootfs.mount: Deactivated successfully.
Apr 12 18:35:26.647278 env[1311]: time="2024-04-12T18:35:26.647226057Z" level=info msg="shim disconnected" id=9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940
Apr 12 18:35:26.647842 env[1311]: time="2024-04-12T18:35:26.647816338Z" level=warning msg="cleaning up after shim disconnected" id=9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940 namespace=k8s.io
Apr 12 18:35:26.647943 env[1311]: time="2024-04-12T18:35:26.647927818Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:26.655865 env[1311]: time="2024-04-12T18:35:26.655822269Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5256 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:26.746145 kubelet[2403]: E0412 18:35:26.744755 2403 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:43024->10.200.20.33:2379: read: connection timed out"
Apr 12 18:35:26.746309 systemd[1]: cri-containerd-ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7.scope: Deactivated successfully.
Apr 12 18:35:26.746632 systemd[1]: cri-containerd-ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7.scope: Consumed 3.422s CPU time.
Apr 12 18:35:26.767040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7-rootfs.mount: Deactivated successfully.
Apr 12 18:35:26.783249 env[1311]: time="2024-04-12T18:35:26.783189933Z" level=info msg="shim disconnected" id=ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7
Apr 12 18:35:26.783249 env[1311]: time="2024-04-12T18:35:26.783243853Z" level=warning msg="cleaning up after shim disconnected" id=ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7 namespace=k8s.io
Apr 12 18:35:26.783249 env[1311]: time="2024-04-12T18:35:26.783254093Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:26.790550 env[1311]: time="2024-04-12T18:35:26.790491504Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5282 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:26.998107 kubelet[2403]: I0412 18:35:26.997305 2403 scope.go:115] "RemoveContainer" containerID="9c296159c2db4c0452849acc9c5ed1ae3470d968b13b80f95192331423e2b940"
Apr 12 18:35:27.001597 env[1311]: time="2024-04-12T18:35:27.001518009Z" level=info msg="CreateContainer within sandbox \"626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 12 18:35:27.002103 kubelet[2403]: I0412 18:35:27.002083 2403 scope.go:115] "RemoveContainer" containerID="ba23c22423fc92b11155d76d9f04dcb552d4c7af3a2149af2cb1e62b449c41e7"
Apr 12 18:35:27.005516 env[1311]: time="2024-04-12T18:35:27.005471014Z" level=info msg="CreateContainer within sandbox \"0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 12 18:35:27.046608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3603468831.mount: Deactivated successfully.
Apr 12 18:35:27.069826 env[1311]: time="2024-04-12T18:35:27.069747827Z" level=info msg="CreateContainer within sandbox \"626d7a19a255b7d060eed91d2e988dfd27ed559ba08728e15461bf46cd373789\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3e8346b180ac52bcf1bf3320edafa917b13b643a54db454e65012efe01744dfc\""
Apr 12 18:35:27.070474 env[1311]: time="2024-04-12T18:35:27.070441908Z" level=info msg="StartContainer for \"3e8346b180ac52bcf1bf3320edafa917b13b643a54db454e65012efe01744dfc\""
Apr 12 18:35:27.074264 env[1311]: time="2024-04-12T18:35:27.074211153Z" level=info msg="CreateContainer within sandbox \"0f2614d7beeb31e45f5ec9bbdeb93732b08a34b623e1f1952d9be4c6d0c9bdc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a0d773a0a9bce5450fb4a9ec0e3980d18f64fc064632f6e7591023195751454a\""
Apr 12 18:35:27.075186 env[1311]: time="2024-04-12T18:35:27.075145154Z" level=info msg="StartContainer for \"a0d773a0a9bce5450fb4a9ec0e3980d18f64fc064632f6e7591023195751454a\""
Apr 12 18:35:27.100603 systemd[1]: Started cri-containerd-3e8346b180ac52bcf1bf3320edafa917b13b643a54db454e65012efe01744dfc.scope.
Apr 12 18:35:27.113007 systemd[1]: Started cri-containerd-a0d773a0a9bce5450fb4a9ec0e3980d18f64fc064632f6e7591023195751454a.scope.
Apr 12 18:35:27.175191 env[1311]: time="2024-04-12T18:35:27.175144058Z" level=info msg="StartContainer for \"3e8346b180ac52bcf1bf3320edafa917b13b643a54db454e65012efe01744dfc\" returns successfully"
Apr 12 18:35:27.179490 env[1311]: time="2024-04-12T18:35:27.179428944Z" level=info msg="StartContainer for \"a0d773a0a9bce5450fb4a9ec0e3980d18f64fc064632f6e7591023195751454a\" returns successfully"
Apr 12 18:35:27.625422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456108865.mount: Deactivated successfully.
Apr 12 18:35:30.644679 kubelet[2403]: E0412 18:35:30.644529 2403 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.3-a-63b2983992.17c59c25f7c4789d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.3-a-63b2983992", UID:"071c2731125a0f6248128f48edc3a4f8", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.3-a-63b2983992"}, FirstTimestamp:time.Date(2024, time.April, 12, 18, 35, 20, 168913053, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 35, 20, 168913053, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:42834->10.200.20.33:2379: read: connection timed out' (will not retry!)
Apr 12 18:35:36.745325 kubelet[2403]: E0412 18:35:36.745245 2403 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-63b2983992?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 12 18:35:37.428453 kubelet[2403]: I0412 18:35:37.428420 2403 status_manager.go:809] "Failed to get status for pod" podUID=32cf39eb5344122991b7bba21140e9d4 pod="kube-system/kube-controller-manager-ci-3510.3.3-a-63b2983992" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:42922->10.200.20.33:2379: read: connection timed out"
Apr 12 18:35:46.746017 kubelet[2403]: E0412 18:35:46.745974 2403 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-63b2983992?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"