Apr 12 18:29:05.003191 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 12 18:29:05.003210 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Apr 12 17:21:24 -00 2024 Apr 12 18:29:05.003217 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 12 18:29:05.003224 kernel: printk: bootconsole [pl11] enabled Apr 12 18:29:05.003230 kernel: efi: EFI v2.70 by EDK II Apr 12 18:29:05.003235 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37b33f98 Apr 12 18:29:05.003242 kernel: random: crng init done Apr 12 18:29:05.003247 kernel: ACPI: Early table checksum verification disabled Apr 12 18:29:05.003253 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Apr 12 18:29:05.003258 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003263 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003270 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Apr 12 18:29:05.003275 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003281 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003288 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003293 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003299 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003306 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003312 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 12 18:29:05.003318 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 12 18:29:05.003324 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 12 18:29:05.003330 kernel: NUMA: Failed to initialise from firmware Apr 12 18:29:05.003335 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Apr 12 18:29:05.003341 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff] Apr 12 18:29:05.003347 kernel: Zone ranges: Apr 12 18:29:05.003352 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 12 18:29:05.003358 kernel: DMA32 empty Apr 12 18:29:05.003365 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 12 18:29:05.003371 kernel: Movable zone start for each node Apr 12 18:29:05.003376 kernel: Early memory node ranges Apr 12 18:29:05.003382 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 12 18:29:05.003388 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Apr 12 18:29:05.003393 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Apr 12 18:29:05.003399 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Apr 12 18:29:05.003405 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Apr 12 18:29:05.003410 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Apr 12 18:29:05.003416 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Apr 12 18:29:05.003422 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Apr 12 18:29:05.003427 kernel: 
node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 12 18:29:05.003434 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 12 18:29:05.003443 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 12 18:29:05.003449 kernel: psci: probing for conduit method from ACPI. Apr 12 18:29:05.003455 kernel: psci: PSCIv1.1 detected in firmware. Apr 12 18:29:05.003461 kernel: psci: Using standard PSCI v0.2 function IDs Apr 12 18:29:05.003469 kernel: psci: MIGRATE_INFO_TYPE not supported. Apr 12 18:29:05.003475 kernel: psci: SMC Calling Convention v1.4 Apr 12 18:29:05.003481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Apr 12 18:29:05.003487 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Apr 12 18:29:05.003493 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Apr 12 18:29:05.003500 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Apr 12 18:29:05.003506 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 12 18:29:05.003512 kernel: Detected PIPT I-cache on CPU0 Apr 12 18:29:05.003518 kernel: CPU features: detected: GIC system register CPU interface Apr 12 18:29:05.003525 kernel: CPU features: detected: Hardware dirty bit management Apr 12 18:29:05.003531 kernel: CPU features: detected: Spectre-BHB Apr 12 18:29:05.003537 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 12 18:29:05.003544 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 12 18:29:05.003550 kernel: CPU features: detected: ARM erratum 1418040 Apr 12 18:29:05.003556 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Apr 12 18:29:05.003563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 12 18:29:05.003569 kernel: Policy zone: Normal Apr 12 18:29:05.003576 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8 Apr 12 18:29:05.003583 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 12 18:29:05.003589 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 12 18:29:05.003595 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 12 18:29:05.003601 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 12 18:29:05.003609 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB) Apr 12 18:29:05.003615 kernel: Memory: 3990260K/4194160K available (9792K kernel code, 2092K rwdata, 7568K rodata, 36352K init, 777K bss, 203900K reserved, 0K cma-reserved) Apr 12 18:29:05.003622 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 12 18:29:05.003628 kernel: trace event string verifier disabled Apr 12 18:29:05.003634 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 12 18:29:05.003640 kernel: rcu: RCU event tracing is enabled. Apr 12 18:29:05.003646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 12 18:29:05.003653 kernel: Trampoline variant of Tasks RCU enabled. Apr 12 18:29:05.003659 kernel: Tracing variant of Tasks RCU enabled. 
Apr 12 18:29:05.003665 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 12 18:29:05.003671 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 12 18:29:05.011018 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 12 18:29:05.011038 kernel: GICv3: 960 SPIs implemented Apr 12 18:29:05.011045 kernel: GICv3: 0 Extended SPIs implemented Apr 12 18:29:05.011052 kernel: GICv3: Distributor has no Range Selector support Apr 12 18:29:05.011058 kernel: Root IRQ handler: gic_handle_irq Apr 12 18:29:05.011065 kernel: GICv3: 16 PPIs implemented Apr 12 18:29:05.011071 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 12 18:29:05.011078 kernel: ITS: No ITS available, not enabling LPIs Apr 12 18:29:05.011085 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 12 18:29:05.011091 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 12 18:29:05.011098 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 12 18:29:05.011105 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 12 18:29:05.011117 kernel: Console: colour dummy device 80x25 Apr 12 18:29:05.011124 kernel: printk: console [tty1] enabled Apr 12 18:29:05.011131 kernel: ACPI: Core revision 20210730 Apr 12 18:29:05.011137 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 12 18:29:05.011144 kernel: pid_max: default: 32768 minimum: 301 Apr 12 18:29:05.011150 kernel: LSM: Security Framework initializing Apr 12 18:29:05.011156 kernel: SELinux: Initializing. Apr 12 18:29:05.011163 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 12 18:29:05.011169 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 12 18:29:05.011177 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Apr 12 18:29:05.011184 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Apr 12 18:29:05.011191 kernel: rcu: Hierarchical SRCU implementation. Apr 12 18:29:05.011197 kernel: Remapping and enabling EFI services. Apr 12 18:29:05.011203 kernel: smp: Bringing up secondary CPUs ... Apr 12 18:29:05.011210 kernel: Detected PIPT I-cache on CPU1 Apr 12 18:29:05.011217 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 12 18:29:05.011223 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 12 18:29:05.011230 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 12 18:29:05.011238 kernel: smp: Brought up 1 node, 2 CPUs Apr 12 18:29:05.011244 kernel: SMP: Total of 2 processors activated. 
Apr 12 18:29:05.011251 kernel: CPU features: detected: 32-bit EL0 Support Apr 12 18:29:05.011258 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 12 18:29:05.011264 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 12 18:29:05.011271 kernel: CPU features: detected: CRC32 instructions Apr 12 18:29:05.011277 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 12 18:29:05.011284 kernel: CPU features: detected: LSE atomic instructions Apr 12 18:29:05.011290 kernel: CPU features: detected: Privileged Access Never Apr 12 18:29:05.011298 kernel: CPU: All CPU(s) started at EL1 Apr 12 18:29:05.011305 kernel: alternatives: patching kernel code Apr 12 18:29:05.011316 kernel: devtmpfs: initialized Apr 12 18:29:05.011324 kernel: KASLR enabled Apr 12 18:29:05.011331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 12 18:29:05.011338 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 12 18:29:05.011345 kernel: pinctrl core: initialized pinctrl subsystem Apr 12 18:29:05.011351 kernel: SMBIOS 3.1.0 present. Apr 12 18:29:05.011358 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Apr 12 18:29:05.011365 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 12 18:29:05.011374 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 12 18:29:05.011381 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 12 18:29:05.011388 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 12 18:29:05.011394 kernel: audit: initializing netlink subsys (disabled) Apr 12 18:29:05.011401 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1 Apr 12 18:29:05.011408 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 12 18:29:05.011415 kernel: cpuidle: using governor menu Apr 12 18:29:05.011423 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Apr 12 18:29:05.011430 kernel: ASID allocator initialised with 32768 entries Apr 12 18:29:05.011437 kernel: ACPI: bus type PCI registered Apr 12 18:29:05.011443 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 12 18:29:05.011450 kernel: Serial: AMBA PL011 UART driver Apr 12 18:29:05.011457 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Apr 12 18:29:05.011464 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Apr 12 18:29:05.011471 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Apr 12 18:29:05.011477 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Apr 12 18:29:05.011485 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:29:05.011492 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 12 18:29:05.011499 kernel: ACPI: Added _OSI(Module Device) Apr 12 18:29:05.011505 kernel: ACPI: Added _OSI(Processor Device) Apr 12 18:29:05.011512 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 12 18:29:05.011519 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 12 18:29:05.011526 kernel: ACPI: Added _OSI(Linux-Dell-Video) Apr 12 18:29:05.011532 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Apr 12 18:29:05.011539 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Apr 12 18:29:05.011547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 12 18:29:05.011554 kernel: ACPI: Interpreter enabled Apr 12 18:29:05.011560 kernel: ACPI: Using GIC for interrupt routing Apr 12 18:29:05.011567 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 12 18:29:05.011574 kernel: printk: console [ttyAMA0] enabled Apr 12 18:29:05.011581 kernel: printk: bootconsole [pl11] disabled Apr 12 18:29:05.011587 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 12 18:29:05.011594 kernel: iommu: Default domain type: Translated Apr 12 18:29:05.011601 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 12 18:29:05.011609 kernel: vgaarb: loaded Apr 12 18:29:05.011615 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 12 18:29:05.011623 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 12 18:29:05.011629 kernel: PTP clock support registered Apr 12 18:29:05.011636 kernel: Registered efivars operations Apr 12 18:29:05.011643 kernel: No ACPI PMU IRQ for CPU0 Apr 12 18:29:05.011650 kernel: No ACPI PMU IRQ for CPU1 Apr 12 18:29:05.011656 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 12 18:29:05.011663 kernel: VFS: Disk quotas dquot_6.6.0 Apr 12 18:29:05.011671 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 12 18:29:05.011700 kernel: pnp: PnP ACPI init Apr 12 18:29:05.011707 kernel: pnp: PnP ACPI: found 0 devices Apr 12 18:29:05.011714 kernel: NET: Registered PF_INET protocol family Apr 12 18:29:05.011721 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 12 18:29:05.011728 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 12 18:29:05.011735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 12 18:29:05.011742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 12 18:29:05.011749 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Apr 12 18:29:05.011757 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 12 18:29:05.011764 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 12 18:29:05.011771 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 12 18:29:05.011778 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 12 18:29:05.011785 kernel: PCI: CLS 0 bytes, default 64 Apr 12 18:29:05.011791 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Apr 12 18:29:05.011798 kernel: kvm [1]: HYP mode not available Apr 12 18:29:05.011805 kernel: Initialise system trusted keyrings Apr 12 18:29:05.011812 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 12 18:29:05.011820 kernel: Key type asymmetric registered Apr 12 18:29:05.011826 kernel: Asymmetric key parser 'x509' registered Apr 12 18:29:05.011833 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 12 18:29:05.011840 kernel: io scheduler mq-deadline registered Apr 12 18:29:05.011846 kernel: io scheduler kyber registered Apr 12 18:29:05.011853 kernel: io scheduler bfq registered Apr 12 18:29:05.011860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 12 18:29:05.011867 kernel: thunder_xcv, ver 1.0 Apr 12 18:29:05.011873 kernel: thunder_bgx, ver 1.0 Apr 12 18:29:05.011881 kernel: nicpf, ver 1.0 Apr 12 18:29:05.011888 kernel: nicvf, ver 1.0 Apr 12 18:29:05.012023 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 12 18:29:05.012085 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-04-12T18:29:04 UTC (1712946544) Apr 12 18:29:05.012094 kernel: efifb: probing for efifb Apr 12 18:29:05.012101 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 12 18:29:05.012108 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 12 18:29:05.012115 kernel: efifb: scrolling: redraw Apr 12 18:29:05.012124 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 12 18:29:05.012131 kernel: Console: switching to colour frame buffer device 128x48 Apr 12 18:29:05.012137 kernel: fb0: EFI VGA frame buffer device Apr 12 18:29:05.012144 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Apr 12 18:29:05.012151 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 12 18:29:05.012158 kernel: NET: Registered PF_INET6 protocol family Apr 12 18:29:05.012165 kernel: Segment Routing with IPv6 Apr 12 18:29:05.012171 kernel: In-situ OAM (IOAM) with IPv6 Apr 12 18:29:05.012178 kernel: NET: Registered PF_PACKET protocol family Apr 12 18:29:05.012186 kernel: Key type dns_resolver registered Apr 12 18:29:05.012193 kernel: registered taskstats version 1 Apr 12 18:29:05.012199 kernel: Loading compiled-in X.509 certificates Apr 12 18:29:05.012207 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 8c258d82bbd8df4a9da2c0ea4108142f04be6b34' Apr 12 18:29:05.012213 kernel: Key type .fscrypt registered Apr 12 18:29:05.012220 kernel: Key type fscrypt-provisioning registered Apr 12 18:29:05.012226 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 12 18:29:05.012233 kernel: ima: Allocated hash algorithm: sha1 Apr 12 18:29:05.012241 kernel: ima: No architecture policies found Apr 12 18:29:05.012249 kernel: Freeing unused kernel memory: 36352K Apr 12 18:29:05.012255 kernel: Run /init as init process Apr 12 18:29:05.012262 kernel: with arguments: Apr 12 18:29:05.012269 kernel: /init Apr 12 18:29:05.012275 kernel: with environment: Apr 12 18:29:05.012282 kernel: HOME=/ Apr 12 18:29:05.012288 kernel: TERM=linux Apr 12 18:29:05.012295 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 12 18:29:05.012304 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:29:05.012314 systemd[1]: Detected virtualization microsoft. Apr 12 18:29:05.012322 systemd[1]: Detected architecture arm64. Apr 12 18:29:05.012329 systemd[1]: Running in initrd. Apr 12 18:29:05.012336 systemd[1]: No hostname configured, using default hostname. Apr 12 18:29:05.012343 systemd[1]: Hostname set to <localhost>. Apr 12 18:29:05.012350 systemd[1]: Initializing machine ID from random generator. Apr 12 18:29:05.012357 systemd[1]: Queued start job for default target initrd.target. Apr 12 18:29:05.012366 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:29:05.012373 systemd[1]: Reached target cryptsetup.target. Apr 12 18:29:05.012381 systemd[1]: Reached target paths.target. Apr 12 18:29:05.012388 systemd[1]: Reached target slices.target. Apr 12 18:29:05.012394 systemd[1]: Reached target swap.target. Apr 12 18:29:05.012401 systemd[1]: Reached target timers.target. Apr 12 18:29:05.012409 systemd[1]: Listening on iscsid.socket. Apr 12 18:29:05.012416 systemd[1]: Listening on iscsiuio.socket. Apr 12 18:29:05.012424 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:29:05.012431 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:29:05.012438 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:29:05.012446 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:29:05.012453 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:29:05.012460 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:29:05.012467 systemd[1]: Reached target sockets.target. Apr 12 18:29:05.012474 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:29:05.012481 systemd[1]: Finished network-cleanup.service.
Apr 12 18:29:05.012490 systemd[1]: Starting systemd-fsck-usr.service... Apr 12 18:29:05.012497 systemd[1]: Starting systemd-journald.service... Apr 12 18:29:05.012504 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:29:05.012511 systemd[1]: Starting systemd-resolved.service... Apr 12 18:29:05.012518 systemd[1]: Starting systemd-vconsole-setup.service... Apr 12 18:29:05.012529 systemd-journald[236]: Journal started Apr 12 18:29:05.012572 systemd-journald[236]: Runtime Journal (/run/log/journal/a8782c43caa44e319ec1a42ff26319aa) is 8.0M, max 78.6M, 70.6M free. Apr 12 18:29:04.990834 systemd-modules-load[237]: Inserted module 'overlay' Apr 12 18:29:05.043941 systemd[1]: Started systemd-journald.service. Apr 12 18:29:05.043998 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 12 18:29:05.037795 systemd-resolved[238]: Positive Trust Anchors: Apr 12 18:29:05.087120 kernel: audit: type=1130 audit(1712946545.048:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.087144 kernel: Bridge firewalling registered Apr 12 18:29:05.087153 kernel: SCSI subsystem initialized Apr 12 18:29:05.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.037805 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:29:05.114235 kernel: audit: type=1130 audit(1712946545.091:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.037833 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:29:05.183278 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:29:05.183308 kernel: audit: type=1130 audit(1712946545.129:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.183318 kernel: device-mapper: uevent: version 1.0.3 Apr 12 18:29:05.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.039979 systemd-resolved[238]: Defaulting to hostname 'linux'. 
Apr 12 18:29:05.216770 kernel: audit: type=1130 audit(1712946545.188:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.216797 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:29:05.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.068388 systemd[1]: Started systemd-resolved.service. Apr 12 18:29:05.246044 kernel: audit: type=1130 audit(1712946545.222:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.073368 systemd-modules-load[237]: Inserted module 'br_netfilter' Apr 12 18:29:05.106288 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:29:05.130423 systemd[1]: Finished systemd-fsck-usr.service. Apr 12 18:29:05.188542 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 18:29:05.223212 systemd[1]: Reached target nss-lookup.target. Apr 12 18:29:05.256377 systemd[1]: Starting dracut-cmdline-ask.service... Apr 12 18:29:05.256495 systemd-modules-load[237]: Inserted module 'dm_multipath' Apr 12 18:29:05.318401 kernel: audit: type=1130 audit(1712946545.297:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.272223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:29:05.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.288796 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:29:05.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.297738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:29:05.381693 kernel: audit: type=1130 audit(1712946545.322:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.381718 kernel: audit: type=1130 audit(1712946545.348:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.323422 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:29:05.353349 systemd[1]: Starting dracut-cmdline.service... 
Apr 12 18:29:05.394425 dracut-cmdline[257]: dracut-dracut-053 Apr 12 18:29:05.374503 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:29:05.403311 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8 Apr 12 18:29:05.454430 kernel: audit: type=1130 audit(1712946545.407:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.395568 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:29:05.505702 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:29:05.521855 kernel: iscsi: registered transport (tcp) Apr 12 18:29:05.542396 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:29:05.542436 kernel: QLogic iSCSI HBA Driver Apr 12 18:29:05.572498 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:29:05.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:05.578068 systemd[1]: Starting dracut-pre-udev.service... Apr 12 18:29:05.634717 kernel: raid6: neonx8 gen() 13819 MB/s Apr 12 18:29:05.651692 kernel: raid6: neonx8 xor() 10842 MB/s Apr 12 18:29:05.671688 kernel: raid6: neonx4 gen() 13524 MB/s Apr 12 18:29:05.692689 kernel: raid6: neonx4 xor() 11310 MB/s Apr 12 18:29:05.712687 kernel: raid6: neonx2 gen() 13032 MB/s Apr 12 18:29:05.733688 kernel: raid6: neonx2 xor() 10426 MB/s Apr 12 18:29:05.754689 kernel: raid6: neonx1 gen() 10535 MB/s Apr 12 18:29:05.775687 kernel: raid6: neonx1 xor() 8801 MB/s Apr 12 18:29:05.795687 kernel: raid6: int64x8 gen() 6272 MB/s Apr 12 18:29:05.816689 kernel: raid6: int64x8 xor() 3545 MB/s Apr 12 18:29:05.836687 kernel: raid6: int64x4 gen() 7199 MB/s Apr 12 18:29:05.856692 kernel: raid6: int64x4 xor() 3859 MB/s Apr 12 18:29:05.877693 kernel: raid6: int64x2 gen() 6155 MB/s Apr 12 18:29:05.897691 kernel: raid6: int64x2 xor() 3322 MB/s Apr 12 18:29:05.917688 kernel: raid6: int64x1 gen() 5050 MB/s Apr 12 18:29:05.942788 kernel: raid6: int64x1 xor() 2646 MB/s Apr 12 18:29:05.942808 kernel: raid6: using algorithm neonx8 gen() 13819 MB/s Apr 12 18:29:05.942825 kernel: raid6: .... xor() 10842 MB/s, rmw enabled Apr 12 18:29:05.947039 kernel: raid6: using neon recovery algorithm Apr 12 18:29:05.963699 kernel: xor: measuring software checksum speed Apr 12 18:29:05.971887 kernel: 8regs : 17286 MB/sec Apr 12 18:29:05.971896 kernel: 32regs : 20765 MB/sec Apr 12 18:29:05.976477 kernel: arm64_neon : 27911 MB/sec Apr 12 18:29:05.976486 kernel: xor: using function: arm64_neon (27911 MB/sec) Apr 12 18:29:06.037702 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Apr 12 18:29:06.047076 systemd[1]: Finished dracut-pre-udev.service. 
Apr 12 18:29:06.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:06.054000 audit: BPF prog-id=7 op=LOAD Apr 12 18:29:06.054000 audit: BPF prog-id=8 op=LOAD Apr 12 18:29:06.055627 systemd[1]: Starting systemd-udevd.service... Apr 12 18:29:06.070151 systemd-udevd[437]: Using default interface naming scheme 'v252'. Apr 12 18:29:06.076919 systemd[1]: Started systemd-udevd.service. Apr 12 18:29:06.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:06.087469 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:29:06.102567 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation Apr 12 18:29:06.131243 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:29:06.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:06.136716 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:29:06.174112 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:29:06.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:06.227719 kernel: hv_vmbus: Vmbus version:5.3 Apr 12 18:29:06.253358 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 12 18:29:06.253412 kernel: hv_vmbus: registering driver hv_netvsc Apr 12 18:29:06.253421 kernel: hv_vmbus: registering driver hid_hyperv Apr 12 18:29:06.253430 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Apr 12 18:29:06.253438 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 12 18:29:06.273551 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Apr 12 18:29:06.287707 kernel: hv_vmbus: registering driver hv_storvsc Apr 12 18:29:06.287754 kernel: scsi host0: storvsc_host_t Apr 12 18:29:06.295079 kernel: scsi host1: storvsc_host_t Apr 12 18:29:06.295124 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 12 18:29:06.309727 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Apr 12 18:29:06.328537 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 12 18:29:06.328818 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 12 18:29:06.335053 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 12 18:29:06.335249 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 12 18:29:06.335342 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 12 18:29:06.343483 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 12 18:29:06.343713 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 12 18:29:06.343808 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 12 18:29:06.353711 kernel: hv_netvsc 000d3a07-6d66-000d-3a07-6d66000d3a07 eth0: VF slot 1 added Apr 12 18:29:06.361864 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:29:06.368458 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 12 
18:29:06.375717 kernel: hv_vmbus: registering driver hv_pci Apr 12 18:29:06.387059 kernel: hv_pci 6bebb8f2-032b-4695-bd8f-33e401cbbe9d: PCI VMBus probing: Using version 0x10004 Apr 12 18:29:06.387241 kernel: hv_pci 6bebb8f2-032b-4695-bd8f-33e401cbbe9d: PCI host bridge to bus 032b:00 Apr 12 18:29:06.399345 kernel: pci_bus 032b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 12 18:29:06.399533 kernel: pci_bus 032b:00: No busn resource found for root bus, will use [bus 00-ff] Apr 12 18:29:06.417290 kernel: pci 032b:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 12 18:29:06.430047 kernel: pci 032b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 12 18:29:06.452774 kernel: pci 032b:00:02.0: enabling Extended Tags Apr 12 18:29:06.470704 kernel: pci 032b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 032b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 12 18:29:06.482608 kernel: pci_bus 032b:00: busn_res: [bus 00-ff] end is updated to 00 Apr 12 18:29:06.482803 kernel: pci 032b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 12 18:29:06.523705 kernel: mlx5_core 032b:00:02.0: firmware version: 16.30.1284 Apr 12 18:29:06.688789 kernel: mlx5_core 032b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Apr 12 18:29:06.749445 kernel: hv_netvsc 000d3a07-6d66-000d-3a07-6d66000d3a07 eth0: VF registering: eth1 Apr 12 18:29:06.749702 kernel: mlx5_core 032b:00:02.0 eth1: joined to eth0 Apr 12 18:29:06.761722 kernel: mlx5_core 032b:00:02.0 enP811s1: renamed from eth1 Apr 12 18:29:06.866205 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:29:06.932704 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (501) Apr 12 18:29:06.946580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:29:07.088316 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:29:07.169007 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:29:07.175609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:29:07.188599 systemd[1]: Starting disk-uuid.service... Apr 12 18:29:07.217718 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:29:08.219832 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:29:08.220495 disk-uuid[564]: The operation has completed successfully. Apr 12 18:29:08.276346 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:29:08.278924 systemd[1]: Finished disk-uuid.service. Apr 12 18:29:08.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.295908 systemd[1]: Starting verity-setup.service... Apr 12 18:29:08.344606 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 12 18:29:08.613901 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:29:08.619871 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:29:08.629615 systemd[1]: Finished verity-setup.service. 
Apr 12 18:29:08.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.689702 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:29:08.690188 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:29:08.694136 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:29:08.694936 systemd[1]: Starting ignition-setup.service... Apr 12 18:29:08.702147 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:29:08.742407 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:29:08.742478 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:29:08.746960 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:29:08.788498 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:29:08.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.796000 audit: BPF prog-id=9 op=LOAD Apr 12 18:29:08.797993 systemd[1]: Starting systemd-networkd.service... Apr 12 18:29:08.823733 systemd-networkd[831]: lo: Link UP Apr 12 18:29:08.823745 systemd-networkd[831]: lo: Gained carrier Apr 12 18:29:08.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.824467 systemd-networkd[831]: Enumeration completed Apr 12 18:29:08.827290 systemd[1]: Started systemd-networkd.service. Apr 12 18:29:08.832279 systemd-networkd[831]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:29:08.833785 systemd[1]: Reached target network.target. Apr 12 18:29:08.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.844066 systemd[1]: Starting iscsiuio.service... Apr 12 18:29:08.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.881173 iscsid[841]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:29:08.881173 iscsid[841]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 18:29:08.881173 iscsid[841]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Apr 12 18:29:08.881173 iscsid[841]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:29:08.881173 iscsid[841]: If using hardware iscsi like qla4xxx this message can be ignored.
Apr 12 18:29:08.881173 iscsid[841]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:29:08.881173 iscsid[841]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:29:08.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.852117 systemd[1]: Started iscsiuio.service. Apr 12 18:29:08.863767 systemd[1]: Starting iscsid.service... Apr 12 18:29:08.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.872265 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:29:08.872931 systemd[1]: Started iscsid.service. Apr 12 18:29:09.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.877746 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:29:09.034505 kernel: kauditd_printk_skb: 17 callbacks suppressed Apr 12 18:29:09.034530 kernel: audit: type=1130 audit(1712946549.004:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:08.926802 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:29:08.936931 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:29:09.047916 kernel: mlx5_core 032b:00:02.0 enP811s1: Link up Apr 12 18:29:08.947401 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:29:08.959663 systemd[1]: Reached target remote-fs.target. Apr 12 18:29:08.971735 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:29:08.977048 systemd[1]: Finished ignition-setup.service. Apr 12 18:29:08.989318 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:29:08.996652 systemd[1]: Finished dracut-pre-mount.service. 
Apr 12 18:29:09.087511 kernel: hv_netvsc 000d3a07-6d66-000d-3a07-6d66000d3a07 eth0: Data path switched to VF: enP811s1 Apr 12 18:29:09.087708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:29:09.088034 systemd-networkd[831]: enP811s1: Link UP Apr 12 18:29:09.088915 systemd-networkd[831]: eth0: Link UP Apr 12 18:29:09.089286 systemd-networkd[831]: eth0: Gained carrier Apr 12 18:29:09.101247 systemd-networkd[831]: enP811s1: Gained carrier Apr 12 18:29:09.111754 systemd-networkd[831]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 12 18:29:10.591983 systemd-networkd[831]: eth0: Gained IPv6LL Apr 12 18:29:12.105432 ignition[856]: Ignition 2.14.0 Apr 12 18:29:12.105444 ignition[856]: Stage: fetch-offline Apr 12 18:29:12.105500 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:12.105526 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:12.233641 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:12.233834 ignition[856]: parsed url from cmdline: "" Apr 12 18:29:12.233838 ignition[856]: no config URL provided Apr 12 18:29:12.233844 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:29:12.284092 kernel: audit: type=1130 audit(1712946552.256:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.248150 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:29:12.233852 ignition[856]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:29:12.258270 systemd[1]: Starting ignition-fetch.service... 
Apr 12 18:29:12.233858 ignition[856]: failed to fetch config: resource requires networking Apr 12 18:29:12.234208 ignition[856]: Ignition finished successfully Apr 12 18:29:12.287938 ignition[864]: Ignition 2.14.0 Apr 12 18:29:12.287945 ignition[864]: Stage: fetch Apr 12 18:29:12.288057 ignition[864]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:12.288076 ignition[864]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:12.300007 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:12.300152 ignition[864]: parsed url from cmdline: "" Apr 12 18:29:12.300156 ignition[864]: no config URL provided Apr 12 18:29:12.300161 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:29:12.300169 ignition[864]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:29:12.300200 ignition[864]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 12 18:29:12.326262 ignition[864]: GET result: OK Apr 12 18:29:12.326382 ignition[864]: config has been read from IMDS userdata Apr 12 18:29:12.326443 ignition[864]: parsing config with SHA512: e13edbeba2ed7bd91c4493ada6dd08b51f679fe78e83f8c88b9e9f248afae0badff138ea6460faa829604e3a81429ce0cbc6b49ffef1f3d0d3a14631be544705 Apr 12 18:29:12.388058 unknown[864]: fetched base config from "system" Apr 12 18:29:12.388732 ignition[864]: fetch: fetch complete Apr 12 18:29:12.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.388069 unknown[864]: fetched base config from "system" Apr 12 18:29:12.420899 kernel: audit: type=1130 audit(1712946552.397:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.388737 ignition[864]: fetch: fetch passed Apr 12 18:29:12.388074 unknown[864]: fetched user config from "azure" Apr 12 18:29:12.388776 ignition[864]: Ignition finished successfully Apr 12 18:29:12.393213 systemd[1]: Finished ignition-fetch.service. Apr 12 18:29:12.429182 ignition[870]: Ignition 2.14.0 Apr 12 18:29:12.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.398570 systemd[1]: Starting ignition-kargs.service... Apr 12 18:29:12.429189 ignition[870]: Stage: kargs Apr 12 18:29:12.478855 kernel: audit: type=1130 audit(1712946552.447:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.438933 systemd[1]: Finished ignition-kargs.service. Apr 12 18:29:12.513718 kernel: audit: type=1130 audit(1712946552.487:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:12.429312 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:12.467127 systemd[1]: Starting ignition-disks.service... Apr 12 18:29:12.429336 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:12.483220 systemd[1]: Finished ignition-disks.service. Apr 12 18:29:12.432822 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:12.487763 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:29:12.436760 ignition[870]: kargs: kargs passed Apr 12 18:29:12.513701 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:29:12.437030 ignition[870]: Ignition finished successfully Apr 12 18:29:12.518067 systemd[1]: Reached target local-fs.target. Apr 12 18:29:12.474313 ignition[876]: Ignition 2.14.0 Apr 12 18:29:12.529286 systemd[1]: Reached target sysinit.target. Apr 12 18:29:12.474320 ignition[876]: Stage: disks Apr 12 18:29:12.537866 systemd[1]: Reached target basic.target. Apr 12 18:29:12.474434 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:12.551673 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:29:12.474451 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:12.477584 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:12.480309 ignition[876]: disks: disks passed Apr 12 18:29:12.480384 ignition[876]: Ignition finished successfully Apr 12 18:29:12.620385 systemd-fsck[884]: ROOT: clean, 612/7326000 files, 481074/7359488 blocks Apr 12 18:29:12.630921 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:29:12.656241 kernel: audit: type=1130 audit(1712946552.635:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:12.654864 systemd[1]: Mounting sysroot.mount... Apr 12 18:29:12.681715 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:29:12.681961 systemd[1]: Mounted sysroot.mount. Apr 12 18:29:12.685796 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:29:12.725003 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:29:12.731149 systemd[1]: Starting flatcar-metadata-hostname.service... Apr 12 18:29:12.740463 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:29:12.740497 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:29:12.746885 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:29:12.837110 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:29:12.842266 systemd[1]: Starting initrd-setup-root.service... 
Apr 12 18:29:12.864712 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (895) Apr 12 18:29:12.874570 initrd-setup-root[900]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:29:12.890422 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:29:12.890447 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:29:12.890464 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:29:12.894627 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:29:12.923642 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:29:12.959769 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:29:12.968967 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:29:13.609584 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:29:13.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.615381 systemd[1]: Starting ignition-mount.service... Apr 12 18:29:13.645542 kernel: audit: type=1130 audit(1712946553.614:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.644909 systemd[1]: Starting sysroot-boot.service... Apr 12 18:29:13.655243 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 18:29:13.655404 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 18:29:13.682604 systemd[1]: Finished sysroot-boot.service. Apr 12 18:29:13.711892 kernel: audit: type=1130 audit(1712946553.687:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.846526 ignition[964]: INFO : Ignition 2.14.0 Apr 12 18:29:13.846526 ignition[964]: INFO : Stage: mount Apr 12 18:29:13.858983 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:13.858983 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:13.858983 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:13.858983 ignition[964]: INFO : mount: mount passed Apr 12 18:29:13.858983 ignition[964]: INFO : Ignition finished successfully Apr 12 18:29:13.924883 kernel: audit: type=1130 audit(1712946553.871:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:13.859888 systemd[1]: Finished ignition-mount.service. 
Apr 12 18:29:14.476927 coreos-metadata[894]: Apr 12 18:29:14.476 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 12 18:29:14.488261 coreos-metadata[894]: Apr 12 18:29:14.488 INFO Fetch successful Apr 12 18:29:14.522368 coreos-metadata[894]: Apr 12 18:29:14.522 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 12 18:29:14.548462 coreos-metadata[894]: Apr 12 18:29:14.548 INFO Fetch successful Apr 12 18:29:14.565920 coreos-metadata[894]: Apr 12 18:29:14.565 INFO wrote hostname ci-3510.3.3-a-e21a461a74 to /sysroot/etc/hostname Apr 12 18:29:14.575600 systemd[1]: Finished flatcar-metadata-hostname.service. Apr 12 18:29:14.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.583158 systemd[1]: Starting ignition-files.service... Apr 12 18:29:14.609527 kernel: audit: type=1130 audit(1712946554.581:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:14.608788 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:29:14.628703 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (973) Apr 12 18:29:14.640841 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:29:14.640854 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:29:14.645350 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:29:14.650428 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:29:14.664508 ignition[992]: INFO : Ignition 2.14.0 Apr 12 18:29:14.664508 ignition[992]: INFO : Stage: files Apr 12 18:29:14.675197 ignition[992]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:14.675197 ignition[992]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:14.675197 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:14.675197 ignition[992]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:29:14.675197 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:29:14.675197 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:29:14.759544 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:29:14.767614 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:29:14.783753 unknown[992]: wrote ssh authorized keys file for user: core Apr 12 18:29:14.790391 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:29:14.790391 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:29:14.790391 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Apr 12 18:29:15.091715 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK 
Apr 12 18:29:15.255268 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Apr 12 18:29:15.273911 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:29:15.273911 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:29:15.273911 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 12 18:29:15.422227 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:29:15.644693 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:29:15.655465 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:29:15.655465 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Apr 12 18:29:15.898603 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 12 18:29:16.159915 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Apr 12 18:29:16.175852 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:29:16.175852 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:29:16.175852 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubectl: attempt #1 Apr 12 18:29:16.391985 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Apr 12 18:29:16.723601 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: b303598f3a65bbc366a7bfb4632d3b5cdd2d41b8a7973de80a99f8b1bb058299b57dc39b17a53eb7a54f1a0479ae4e2093fec675f1baff4613e14e0ed9d65c21 Apr 12 18:29:16.741449 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:29:16.741449 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:29:16.741449 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubeadm: attempt #1 Apr 12 18:29:16.792094 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Apr 12 18:29:17.086161 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 3e6beeb7794aa002604f0be43af0255e707846760508ebe98006ec72ae8d7a7cf2c14fd52bbcc5084f0e9366b992dc64341b1da646f1ce6e937fb762f880dc15 Apr 12 18:29:17.104882 ignition[992]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Apr 12 18:29:17.104882 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:29:17.104882 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubelet: attempt #1 Apr 12 18:29:17.157804 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Apr 12 18:29:17.812316 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ded47d757fac0279b1b784756fb54b3a5cb0180ce45833838b00d6d7c87578a985e4627503dd7ff734e5f577cf4752ae7daaa2b68e5934fd4617ea15e995f91b Apr 12 18:29:17.830555 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Apr 12 18:29:17.830555 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:29:17.830555 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Apr 12 18:29:17.830555 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:29:17.830555 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 12 18:29:18.081261 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 12 18:29:18.161761 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:29:18.171876 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 12 18:29:18.315462 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:29:18.325617 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 12 18:29:18.325617 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Apr 12 18:29:18.325617 
ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Apr 12 18:29:18.367952 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (995) Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806917025" Apr 12 18:29:18.367984 ignition[992]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806917025": device or resource busy Apr 12 18:29:18.367984 ignition[992]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2806917025", trying btrfs: device or resource busy Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806917025" Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2806917025" Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2806917025" Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2806917025" Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3448659326" Apr 12 18:29:18.367984 ignition[992]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3448659326": device or resource busy Apr 12 18:29:18.367984 ignition[992]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3448659326", trying btrfs: device or resource busy Apr 12 18:29:18.367984 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3448659326" Apr 12 18:29:18.612632 kernel: audit: type=1130 audit(1712946558.397:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.612664 kernel: audit: type=1130 audit(1712946558.475:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.612675 kernel: audit: type=1131 audit(1712946558.475:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:18.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.612803 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3448659326" Apr 12 18:29:18.612803 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3448659326" Apr 12 18:29:18.612803 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3448659326" Apr 12 18:29:18.612803 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(18): [started] processing unit "waagent.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(18): [finished] processing unit "waagent.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(19): [started] processing unit "nvidia.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(19): [finished] processing unit "nvidia.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:29:18.612803 ignition[992]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 12 18:29:18.837925 kernel: audit: type=1130 audit(1712946558.781:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.371507 systemd[1]: mnt-oem3448659326.mount: Deactivated successfully. Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Apr 12 18:29:18.843452 ignition[992]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:29:18.843452 ignition[992]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 12 18:29:18.843452 ignition[992]: INFO : files: files passed Apr 12 18:29:18.843452 ignition[992]: INFO : Ignition finished successfully Apr 12 18:29:19.052912 kernel: audit: type=1130 audit(1712946558.862:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.052938 kernel: audit: type=1131 audit(1712946558.888:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.052948 kernel: audit: type=1130 audit(1712946558.963:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:18.382691 systemd[1]: Finished ignition-files.service. Apr 12 18:29:19.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.077509 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 12 18:29:18.401135 systemd[1]: Starting initrd-setup-root-after-ignition.service... Apr 12 18:29:18.431882 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Apr 12 18:29:19.113401 kernel: audit: type=1131 audit(1712946559.057:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.432737 systemd[1]: Starting ignition-quench.service... Apr 12 18:29:18.448589 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 12 18:29:18.448723 systemd[1]: Finished ignition-quench.service. Apr 12 18:29:18.768826 systemd[1]: Finished initrd-setup-root-after-ignition.service. Apr 12 18:29:18.782429 systemd[1]: Reached target ignition-complete.target. Apr 12 18:29:18.823201 systemd[1]: Starting initrd-parse-etc.service... Apr 12 18:29:18.852160 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 12 18:29:18.852269 systemd[1]: Finished initrd-parse-etc.service. Apr 12 18:29:18.889457 systemd[1]: Reached target initrd-fs.target. Apr 12 18:29:18.901010 systemd[1]: Reached target initrd.target. Apr 12 18:29:19.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.928805 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Apr 12 18:29:18.938021 systemd[1]: Starting dracut-pre-pivot.service... Apr 12 18:29:18.958119 systemd[1]: Finished dracut-pre-pivot.service. Apr 12 18:29:19.243442 kernel: audit: type=1131 audit(1712946559.198:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:18.998827 systemd[1]: Starting initrd-cleanup.service... Apr 12 18:29:19.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.019981 systemd[1]: Stopped target nss-lookup.target. Apr 12 18:29:19.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.025500 systemd[1]: Stopped target remote-cryptsetup.target. Apr 12 18:29:19.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:19.040156 systemd[1]: Stopped target timers.target. Apr 12 18:29:19.048330 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 12 18:29:19.048395 systemd[1]: Stopped dracut-pre-pivot.service. Apr 12 18:29:19.291183 ignition[1030]: INFO : Ignition 2.14.0 Apr 12 18:29:19.291183 ignition[1030]: INFO : Stage: umount Apr 12 18:29:19.291183 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:29:19.291183 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Apr 12 18:29:19.291183 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 12 18:29:19.291183 ignition[1030]: INFO : umount: umount passed Apr 12 18:29:19.291183 ignition[1030]: INFO : Ignition finished successfully Apr 12 18:29:19.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.367200 iscsid[841]: iscsid shutting down. Apr 12 18:29:19.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.080220 systemd[1]: Stopped target initrd.target. Apr 12 18:29:19.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.094266 systemd[1]: Stopped target basic.target. Apr 12 18:29:19.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.103282 systemd[1]: Stopped target ignition-complete.target. Apr 12 18:29:19.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.117955 systemd[1]: Stopped target ignition-diskful.target. Apr 12 18:29:19.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.128120 systemd[1]: Stopped target initrd-root-device.target. 
Apr 12 18:29:19.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.138159 systemd[1]: Stopped target remote-fs.target. Apr 12 18:29:19.146196 systemd[1]: Stopped target remote-fs-pre.target. Apr 12 18:29:19.155790 systemd[1]: Stopped target sysinit.target. Apr 12 18:29:19.163954 systemd[1]: Stopped target local-fs.target. Apr 12 18:29:19.173192 systemd[1]: Stopped target local-fs-pre.target. Apr 12 18:29:19.182798 systemd[1]: Stopped target swap.target. Apr 12 18:29:19.190524 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 12 18:29:19.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.190594 systemd[1]: Stopped dracut-pre-mount.service. Apr 12 18:29:19.219420 systemd[1]: Stopped target cryptsetup.target. Apr 12 18:29:19.231014 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 12 18:29:19.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.231074 systemd[1]: Stopped dracut-initqueue.service. Apr 12 18:29:19.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.239738 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 12 18:29:19.239781 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Apr 12 18:29:19.249513 systemd[1]: ignition-files.service: Deactivated successfully. Apr 12 18:29:19.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.249552 systemd[1]: Stopped ignition-files.service. Apr 12 18:29:19.257514 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 12 18:29:19.257553 systemd[1]: Stopped flatcar-metadata-hostname.service. Apr 12 18:29:19.270384 systemd[1]: Stopping ignition-mount.service... Apr 12 18:29:19.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.287929 systemd[1]: Stopping iscsid.service... Apr 12 18:29:19.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.570000 audit: BPF prog-id=6 op=UNLOAD Apr 12 18:29:19.298900 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 12 18:29:19.298988 systemd[1]: Stopped kmod-static-nodes.service. Apr 12 18:29:19.619482 kernel: kauditd_printk_skb: 22 callbacks suppressed Apr 12 18:29:19.619508 kernel: audit: type=1131 audit(1712946559.593:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:19.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.307412 systemd[1]: Stopping sysroot-boot.service... Apr 12 18:29:19.642265 kernel: audit: type=1131 audit(1712946559.623:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.332271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 12 18:29:19.332370 systemd[1]: Stopped systemd-udev-trigger.service. Apr 12 18:29:19.673194 kernel: audit: type=1131 audit(1712946559.653:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.347835 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 12 18:29:19.347892 systemd[1]: Stopped dracut-pre-trigger.service. Apr 12 18:29:19.353125 systemd[1]: iscsid.service: Deactivated successfully. Apr 12 18:29:19.353245 systemd[1]: Stopped iscsid.service. Apr 12 18:29:19.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.362360 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 12 18:29:19.726876 kernel: audit: type=1131 audit(1712946559.698:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.362463 systemd[1]: Finished initrd-cleanup.service. Apr 12 18:29:19.372322 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 12 18:29:19.748650 kernel: hv_netvsc 000d3a07-6d66-000d-3a07-6d66000d3a07 eth0: Data path switched from VF: enP811s1 Apr 12 18:29:19.372403 systemd[1]: Stopped ignition-mount.service. Apr 12 18:29:19.380881 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 12 18:29:19.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.380939 systemd[1]: Stopped ignition-disks.service. Apr 12 18:29:19.791739 kernel: audit: type=1131 audit(1712946559.762:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.388529 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 12 18:29:19.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.388583 systemd[1]: Stopped ignition-kargs.service. 
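Throughout this shutdown sequence every named record from audit[1] (SERVICE_START, SERVICE_STOP, ...) is echoed by the kernel as a numeric "audit: type=NNNN" line. The pairings visible in this log are enough to build a small lookup table:

    # Audit record types as they pair up in this log: the kernel's
    # "audit: type=NNNN" lines mirror the named records from audit[1].
    import re

    AUDIT_TYPES = {
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1302: "PATH",
        1307: "CWD",
        1327: "PROCTITLE",
        1334: "BPF",
        1400: "AVC",
    }

    def record_name(line: str) -> str:
        m = re.search(r"type=(\d+)", line)
        return AUDIT_TYPES.get(int(m.group(1)), "unknown") if m else "no type"

    print(record_name("audit: type=1131 audit(1712946559.057:45): ..."))
    # -> SERVICE_STOP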
Apr 12 18:29:19.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.396246 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 12 18:29:19.849428 kernel: audit: type=1131 audit(1712946559.795:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.849450 kernel: audit: type=1131 audit(1712946559.821:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.396288 systemd[1]: Stopped ignition-fetch.service. Apr 12 18:29:19.404793 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 12 18:29:19.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.404835 systemd[1]: Stopped ignition-fetch-offline.service. Apr 12 18:29:19.413798 systemd[1]: Stopped target paths.target. Apr 12 18:29:19.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.423062 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 12 18:29:19.940622 kernel: audit: type=1131 audit(1712946559.862:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.940644 kernel: audit: type=1130 audit(1712946559.895:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.940654 kernel: audit: type=1131 audit(1712946559.895:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.426705 systemd[1]: Stopped systemd-ask-password-console.path. Apr 12 18:29:19.432103 systemd[1]: Stopped target slices.target. Apr 12 18:29:19.440124 systemd[1]: Stopped target sockets.target. Apr 12 18:29:19.449619 systemd[1]: iscsid.socket: Deactivated successfully. Apr 12 18:29:19.449670 systemd[1]: Closed iscsid.socket. Apr 12 18:29:19.461477 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 12 18:29:19.461540 systemd[1]: Stopped ignition-setup.service. Apr 12 18:29:19.470070 systemd[1]: Stopping iscsiuio.service... Apr 12 18:29:19.484929 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 12 18:29:19.485417 systemd[1]: iscsiuio.service: Deactivated successfully. Apr 12 18:29:19.485518 systemd[1]: Stopped iscsiuio.service. Apr 12 18:29:19.492019 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Apr 12 18:29:19.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:19.492114 systemd[1]: Stopped sysroot-boot.service. Apr 12 18:29:19.502054 systemd[1]: Stopped target network.target. Apr 12 18:29:19.509927 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 12 18:29:19.509971 systemd[1]: Closed iscsiuio.socket. Apr 12 18:29:19.519724 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 12 18:29:19.519773 systemd[1]: Stopped initrd-setup-root.service. Apr 12 18:29:19.529291 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:29:20.049855 systemd-journald[236]: Received SIGTERM from PID 1 (n/a). Apr 12 18:29:19.539033 systemd[1]: Stopping systemd-resolved.service... Apr 12 18:29:19.546730 systemd-networkd[831]: eth0: DHCPv6 lease lost Apr 12 18:29:20.049000 audit: BPF prog-id=9 op=UNLOAD Apr 12 18:29:19.552624 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 12 18:29:19.552749 systemd[1]: Stopped systemd-resolved.service. Apr 12 18:29:19.562419 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:29:19.562520 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:29:19.571242 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 12 18:29:19.571287 systemd[1]: Closed systemd-networkd.socket. Apr 12 18:29:19.580371 systemd[1]: Stopping network-cleanup.service... Apr 12 18:29:19.588931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 12 18:29:19.588999 systemd[1]: Stopped parse-ip-for-networkd.service. Apr 12 18:29:19.593764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:29:19.593817 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:29:19.643906 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 12 18:29:19.643959 systemd[1]: Stopped systemd-modules-load.service. Apr 12 18:29:19.676090 systemd[1]: Stopping systemd-udevd.service... Apr 12 18:29:19.686386 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 12 18:29:19.691242 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 12 18:29:19.691400 systemd[1]: Stopped systemd-udevd.service. Apr 12 18:29:19.722022 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 12 18:29:19.722069 systemd[1]: Closed systemd-udevd-control.socket. Apr 12 18:29:19.731753 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 12 18:29:19.731797 systemd[1]: Closed systemd-udevd-kernel.socket. Apr 12 18:29:19.754292 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 12 18:29:19.754360 systemd[1]: Stopped dracut-pre-udev.service. Apr 12 18:29:19.787899 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 12 18:29:19.787959 systemd[1]: Stopped dracut-cmdline.service. Apr 12 18:29:19.796208 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 12 18:29:19.796255 systemd[1]: Stopped dracut-cmdline-ask.service. Apr 12 18:29:19.825743 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Apr 12 18:29:19.853969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 12 18:29:19.854058 systemd[1]: Stopped systemd-vconsole-setup.service. Apr 12 18:29:19.886367 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 12 18:29:19.886482 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Apr 12 18:29:19.987296 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 12 18:29:19.987404 systemd[1]: Stopped network-cleanup.service. Apr 12 18:29:19.994715 systemd[1]: Reached target initrd-switch-root.target. Apr 12 18:29:20.005980 systemd[1]: Starting initrd-switch-root.service... Apr 12 18:29:20.022647 systemd[1]: Switching root. Apr 12 18:29:20.050919 systemd-journald[236]: Journal stopped Apr 12 18:29:34.955028 kernel: SELinux: Class mctp_socket not defined in policy. Apr 12 18:29:34.955049 kernel: SELinux: Class anon_inode not defined in policy. Apr 12 18:29:34.955059 kernel: SELinux: the above unknown classes and permissions will be allowed Apr 12 18:29:34.955069 kernel: SELinux: policy capability network_peer_controls=1 Apr 12 18:29:34.955078 kernel: SELinux: policy capability open_perms=1 Apr 12 18:29:34.955086 kernel: SELinux: policy capability extended_socket_class=1 Apr 12 18:29:34.955095 kernel: SELinux: policy capability always_check_network=0 Apr 12 18:29:34.955103 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 12 18:29:34.955111 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 12 18:29:34.955120 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 12 18:29:34.955129 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 12 18:29:34.955138 systemd[1]: Successfully loaded SELinux policy in 334.062ms. Apr 12 18:29:34.955148 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.664ms. Apr 12 18:29:34.955158 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:29:34.955170 systemd[1]: Detected virtualization microsoft. Apr 12 18:29:34.955179 systemd[1]: Detected architecture arm64. Apr 12 18:29:34.955188 systemd[1]: Detected first boot. Apr 12 18:29:34.955197 systemd[1]: Hostname set to <ci-3510.3.3-a-e21a461a74>. Apr 12 18:29:34.955206 systemd[1]: Initializing machine ID from random generator. Apr 12 18:29:34.955215 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
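The "policy capability" lines above are flags compiled into the freshly loaded SELinux policy; the kernel exposes the same flags through selinuxfs, so they can be read back after boot. A sketch, assuming /sys/fs/selinux is mounted:

    # Sketch: re-derive the "SELinux: policy capability NAME=N" lines from
    # selinuxfs; each file below holds "0" or "1".
    import os

    CAPS_DIR = "/sys/fs/selinux/policy_capabilities"

    for name in sorted(os.listdir(CAPS_DIR)):
        with open(os.path.join(CAPS_DIR, name)) as f:
            print(f"policy capability {name}={f.read().strip()}")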
Apr 12 18:29:34.955224 kernel: kauditd_printk_skb: 9 callbacks suppressed Apr 12 18:29:34.955233 kernel: audit: type=1400 audit(1712946565.451:88): avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 18:29:34.955245 kernel: audit: type=1300 audit(1712946565.451:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227ec a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:34.955254 kernel: audit: type=1327 audit(1712946565.451:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:34.955264 kernel: audit: type=1400 audit(1712946565.465:89): avc: denied { associate } for pid=1064 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 18:29:34.955274 kernel: audit: type=1300 audit(1712946565.465:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228c9 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:34.955283 kernel: audit: type=1307 audit(1712946565.465:89): cwd="/" Apr 12 18:29:34.955293 kernel: audit: type=1302 audit(1712946565.465:89): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:34.955302 kernel: audit: type=1302 audit(1712946565.465:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:34.955312 kernel: audit: type=1327 audit(1712946565.465:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:34.955321 systemd[1]: Populated /etc with preset unit settings. Apr 12 18:29:34.955330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:29:34.955339 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:29:34.955349 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
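The three warnings just above are non-fatal deprecations: locksmithd.service still uses cgroup-v1 resource directives, and docker.socket points at the legacy /var/run path that systemd rewrites on the fly. A hypothetical modernized unit fragment (systemd unit syntax; the values are placeholders, since the log does not show the originals):

    [Service]
    # cgroup-v1 -> cgroup-v2 renames flagged in the log:
    #   CPUShares=   -> CPUWeight=   (relative CPU weight)
    #   MemoryLimit= -> MemoryMax=   (hard memory cap)
    CPUWeight=100
    MemoryMax=512M

    [Socket]
    # /var/run is a symlink into /run; reference the canonical path directly.
    ListenStream=/run/docker.sock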
Apr 12 18:29:34.955359 kernel: audit: type=1334 audit(1712946574.194:90): prog-id=12 op=LOAD Apr 12 18:29:34.955368 kernel: audit: type=1334 audit(1712946574.194:91): prog-id=3 op=UNLOAD Apr 12 18:29:34.955376 kernel: audit: type=1334 audit(1712946574.194:92): prog-id=13 op=LOAD Apr 12 18:29:34.955384 kernel: audit: type=1334 audit(1712946574.194:93): prog-id=14 op=LOAD Apr 12 18:29:34.955394 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 12 18:29:34.955403 kernel: audit: type=1334 audit(1712946574.194:94): prog-id=4 op=UNLOAD Apr 12 18:29:34.955414 systemd[1]: Stopped initrd-switch-root.service. Apr 12 18:29:34.955424 kernel: audit: type=1334 audit(1712946574.194:95): prog-id=5 op=UNLOAD Apr 12 18:29:34.955433 kernel: audit: type=1334 audit(1712946574.195:96): prog-id=15 op=LOAD Apr 12 18:29:34.955442 kernel: audit: type=1334 audit(1712946574.195:97): prog-id=12 op=UNLOAD Apr 12 18:29:34.955451 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 12 18:29:34.955460 kernel: audit: type=1334 audit(1712946574.195:98): prog-id=16 op=LOAD Apr 12 18:29:34.955470 systemd[1]: Created slice system-addon\x2dconfig.slice. Apr 12 18:29:34.955479 kernel: audit: type=1334 audit(1712946574.195:99): prog-id=17 op=LOAD Apr 12 18:29:34.955488 systemd[1]: Created slice system-addon\x2drun.slice. Apr 12 18:29:34.955499 systemd[1]: Created slice system-getty.slice. Apr 12 18:29:34.955508 systemd[1]: Created slice system-modprobe.slice. Apr 12 18:29:34.955518 systemd[1]: Created slice system-serial\x2dgetty.slice. Apr 12 18:29:34.955527 systemd[1]: Created slice system-system\x2dcloudinit.slice. Apr 12 18:29:34.955536 systemd[1]: Created slice system-systemd\x2dfsck.slice. Apr 12 18:29:34.955545 systemd[1]: Created slice user.slice. Apr 12 18:29:34.955555 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:29:34.955564 systemd[1]: Started systemd-ask-password-wall.path. Apr 12 18:29:34.955573 systemd[1]: Set up automount boot.automount. Apr 12 18:29:34.955583 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Apr 12 18:29:34.955593 systemd[1]: Stopped target initrd-switch-root.target. Apr 12 18:29:34.955602 systemd[1]: Stopped target initrd-fs.target. Apr 12 18:29:34.955611 systemd[1]: Stopped target initrd-root-fs.target. Apr 12 18:29:34.955620 systemd[1]: Reached target integritysetup.target. Apr 12 18:29:34.955629 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:29:34.955639 systemd[1]: Reached target remote-fs.target. Apr 12 18:29:34.955648 systemd[1]: Reached target slices.target. Apr 12 18:29:34.955658 systemd[1]: Reached target swap.target. Apr 12 18:29:34.955669 systemd[1]: Reached target torcx.target. Apr 12 18:29:34.955687 systemd[1]: Reached target veritysetup.target. Apr 12 18:29:34.955697 systemd[1]: Listening on systemd-coredump.socket. Apr 12 18:29:34.955706 systemd[1]: Listening on systemd-initctl.socket. Apr 12 18:29:34.955715 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:29:34.955727 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:29:34.955736 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:29:34.955745 systemd[1]: Listening on systemd-userdbd.socket. Apr 12 18:29:34.955755 systemd[1]: Mounting dev-hugepages.mount... Apr 12 18:29:34.955764 systemd[1]: Mounting dev-mqueue.mount... Apr 12 18:29:34.955773 systemd[1]: Mounting media.mount... Apr 12 18:29:34.955782 systemd[1]: Mounting sys-kernel-debug.mount... 
Apr 12 18:29:34.955792 systemd[1]: Mounting sys-kernel-tracing.mount... Apr 12 18:29:34.955803 systemd[1]: Mounting tmp.mount... Apr 12 18:29:34.955812 systemd[1]: Starting flatcar-tmpfiles.service... Apr 12 18:29:34.955822 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Apr 12 18:29:34.955831 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:29:34.955840 systemd[1]: Starting modprobe@configfs.service... Apr 12 18:29:34.955849 systemd[1]: Starting modprobe@dm_mod.service... Apr 12 18:29:34.955859 systemd[1]: Starting modprobe@drm.service... Apr 12 18:29:34.955868 systemd[1]: Starting modprobe@efi_pstore.service... Apr 12 18:29:34.955878 systemd[1]: Starting modprobe@fuse.service... Apr 12 18:29:34.955889 systemd[1]: Starting modprobe@loop.service... Apr 12 18:29:34.955899 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 12 18:29:34.955909 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 12 18:29:34.955918 systemd[1]: Stopped systemd-fsck-root.service. Apr 12 18:29:34.955928 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 12 18:29:34.955937 systemd[1]: Stopped systemd-fsck-usr.service. Apr 12 18:29:34.955947 systemd[1]: Stopped systemd-journald.service. Apr 12 18:29:34.955956 kernel: fuse: init (API version 7.34) Apr 12 18:29:34.955967 systemd[1]: systemd-journald.service: Consumed 3.493s CPU time. Apr 12 18:29:34.955976 kernel: loop: module loaded Apr 12 18:29:34.955985 systemd[1]: Starting systemd-journald.service... Apr 12 18:29:34.955994 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:29:34.956004 systemd[1]: Starting systemd-network-generator.service... Apr 12 18:29:34.956014 systemd[1]: Starting systemd-remount-fs.service... Apr 12 18:29:34.956023 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:29:34.956032 systemd[1]: verity-setup.service: Deactivated successfully. Apr 12 18:29:34.956042 systemd[1]: Stopped verity-setup.service. Apr 12 18:29:34.956052 systemd[1]: Mounted dev-hugepages.mount. Apr 12 18:29:34.956062 systemd[1]: Mounted dev-mqueue.mount. Apr 12 18:29:34.956071 systemd[1]: Mounted media.mount. Apr 12 18:29:34.956082 systemd[1]: Mounted sys-kernel-debug.mount. Apr 12 18:29:34.956091 systemd[1]: Mounted sys-kernel-tracing.mount. Apr 12 18:29:34.956106 systemd[1]: Mounted tmp.mount. Apr 12 18:29:34.956119 systemd-journald[1170]: Journal started Apr 12 18:29:34.956160 systemd-journald[1170]: Runtime Journal (/run/log/journal/06fe2ada73fc462090ae0506ad5d38cf) is 8.0M, max 78.6M, 70.6M free. 
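The runtime journal's directory name, 06fe2ada73fc462090ae0506ad5d38cf, is the machine ID that PID 1 initialized from the random generator a few lines earlier; journald keys its volatile storage under /run/log/journal by that ID. To confirm on a live system (a sketch):

    # journald keeps the volatile journal at /run/log/journal/<machine-id>,
    # matching the path in the "Journal started" line above.
    with open("/etc/machine-id") as f:
        machine_id = f.read().strip()

    print(f"/run/log/journal/{machine_id}")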
Apr 12 18:29:23.164000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 12 18:29:24.047000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:29:24.047000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Apr 12 18:29:24.047000 audit: BPF prog-id=10 op=LOAD Apr 12 18:29:24.047000 audit: BPF prog-id=10 op=UNLOAD Apr 12 18:29:24.047000 audit: BPF prog-id=11 op=LOAD Apr 12 18:29:24.047000 audit: BPF prog-id=11 op=UNLOAD Apr 12 18:29:25.451000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Apr 12 18:29:25.451000 audit[1064]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000227ec a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:25.451000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:25.465000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Apr 12 18:29:25.465000 audit[1064]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228c9 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:25.465000 audit: CWD cwd="/" Apr 12 18:29:25.465000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:25.465000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:25.465000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Apr 12 18:29:34.965225 systemd[1]: Started systemd-journald.service. 
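The proctitle= fields in the audit records above are the audited process's argv, hex-encoded with NUL separators and truncated by the kernel at 128 bytes, which is why the last path is clipped. Decoding the value from this log:

    # Decode an audit PROCTITLE value: hex string -> NUL-separated argv.
    def decode_proctitle(hexstr: str) -> list[str]:
        return [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]

    hexstr = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F"
              "72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F"
              "67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
              "2E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61")

    print(decode_proctitle(hexstr))
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator',
    #  '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   # clipped at the 128-byte audit limit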
Apr 12 18:29:34.194000 audit: BPF prog-id=12 op=LOAD Apr 12 18:29:34.194000 audit: BPF prog-id=3 op=UNLOAD Apr 12 18:29:34.194000 audit: BPF prog-id=13 op=LOAD Apr 12 18:29:34.194000 audit: BPF prog-id=14 op=LOAD Apr 12 18:29:34.194000 audit: BPF prog-id=4 op=UNLOAD Apr 12 18:29:34.194000 audit: BPF prog-id=5 op=UNLOAD Apr 12 18:29:34.195000 audit: BPF prog-id=15 op=LOAD Apr 12 18:29:34.195000 audit: BPF prog-id=12 op=UNLOAD Apr 12 18:29:34.195000 audit: BPF prog-id=16 op=LOAD Apr 12 18:29:34.195000 audit: BPF prog-id=17 op=LOAD Apr 12 18:29:34.195000 audit: BPF prog-id=13 op=UNLOAD Apr 12 18:29:34.195000 audit: BPF prog-id=14 op=UNLOAD Apr 12 18:29:34.202000 audit: BPF prog-id=18 op=LOAD Apr 12 18:29:34.202000 audit: BPF prog-id=15 op=UNLOAD Apr 12 18:29:34.208000 audit: BPF prog-id=19 op=LOAD Apr 12 18:29:34.213000 audit: BPF prog-id=20 op=LOAD Apr 12 18:29:34.213000 audit: BPF prog-id=16 op=UNLOAD Apr 12 18:29:34.213000 audit: BPF prog-id=17 op=UNLOAD Apr 12 18:29:34.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.249000 audit: BPF prog-id=18 op=UNLOAD Apr 12 18:29:34.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.815000 audit: BPF prog-id=21 op=LOAD Apr 12 18:29:34.815000 audit: BPF prog-id=22 op=LOAD Apr 12 18:29:34.815000 audit: BPF prog-id=23 op=LOAD Apr 12 18:29:34.815000 audit: BPF prog-id=19 op=UNLOAD Apr 12 18:29:34.815000 audit: BPF prog-id=20 op=UNLOAD Apr 12 18:29:34.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:34.952000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 12 18:29:34.952000 audit[1170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd99a3720 a2=4000 a3=1 items=0 ppid=1 pid=1170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:34.952000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 12 18:29:34.193839 systemd[1]: Queued start job for default target multi-user.target. Apr 12 18:29:25.358143 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:29:34.215329 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 12 18:29:25.375038 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:29:34.215803 systemd[1]: systemd-journald.service: Consumed 3.493s CPU time. Apr 12 18:29:25.375059 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:29:25.375097 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Apr 12 18:29:25.375108 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="skipped missing lower profile" missing profile=oem Apr 12 18:29:25.375144 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Apr 12 18:29:25.375156 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Apr 12 18:29:25.375363 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Apr 12 18:29:25.375397 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Apr 12 18:29:25.375409 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Apr 12 18:29:25.418974 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Apr 12 18:29:25.419015 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Apr 
12 18:29:25.419037 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3 Apr 12 18:29:25.419051 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Apr 12 18:29:25.419073 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3 Apr 12 18:29:25.419088 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Apr 12 18:29:32.808204 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:32Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:32.808465 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:32Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:32.808559 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:32Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:32.808738 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:32Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Apr 12 18:29:32.808789 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:32Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Apr 12 18:29:32.808845 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-04-12T18:29:32Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Apr 12 18:29:34.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.970487 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:29:34.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.975561 systemd[1]: Finished kmod-static-nodes.service. 
Apr 12 18:29:34.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.982173 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 12 18:29:34.982302 systemd[1]: Finished modprobe@configfs.service. Apr 12 18:29:34.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.987355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 12 18:29:34.987485 systemd[1]: Finished modprobe@dm_mod.service. Apr 12 18:29:34.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.992454 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 12 18:29:34.992584 systemd[1]: Finished modprobe@drm.service. Apr 12 18:29:34.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:34.997103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 12 18:29:34.997228 systemd[1]: Finished modprobe@efi_pstore.service. Apr 12 18:29:35.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.002239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 12 18:29:35.002369 systemd[1]: Finished modprobe@fuse.service. Apr 12 18:29:35.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.007197 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 12 18:29:35.007322 systemd[1]: Finished modprobe@loop.service. Apr 12 18:29:35.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.012176 systemd[1]: Finished systemd-network-generator.service. Apr 12 18:29:35.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.017914 systemd[1]: Finished systemd-remount-fs.service. Apr 12 18:29:35.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.023155 systemd[1]: Reached target network-pre.target. Apr 12 18:29:35.030232 systemd[1]: Mounting sys-fs-fuse-connections.mount... Apr 12 18:29:35.036083 systemd[1]: Mounting sys-kernel-config.mount... Apr 12 18:29:35.040382 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 12 18:29:35.056913 systemd[1]: Starting systemd-hwdb-update.service... Apr 12 18:29:35.062423 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:29:35.067377 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 12 18:29:35.068571 systemd[1]: Starting systemd-random-seed.service... Apr 12 18:29:35.074140 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Apr 12 18:29:35.075443 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:29:35.081528 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:29:35.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.087357 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:29:35.092286 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:29:35.098450 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:29:35.107419 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:29:35.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.120134 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:29:35.126400 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 12 18:29:35.129229 systemd[1]: Finished systemd-random-seed.service. Apr 12 18:29:35.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:35.134185 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:29:35.164293 systemd-journald[1170]: Time spent on flushing to /var/log/journal/06fe2ada73fc462090ae0506ad5d38cf is 18.002ms for 1152 entries. Apr 12 18:29:35.164293 systemd-journald[1170]: System Journal (/var/log/journal/06fe2ada73fc462090ae0506ad5d38cf) is 8.0M, max 2.6G, 2.6G free. Apr 12 18:29:35.293988 systemd-journald[1170]: Received client request to flush runtime journal. Apr 12 18:29:35.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.238789 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:29:35.295001 systemd[1]: Finished systemd-journal-flush.service. Apr 12 18:29:35.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:35.800040 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:29:36.543623 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 18:29:36.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:36.548000 audit: BPF prog-id=24 op=LOAD Apr 12 18:29:36.548000 audit: BPF prog-id=25 op=LOAD Apr 12 18:29:36.548000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:29:36.548000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:29:36.549956 systemd[1]: Starting systemd-udevd.service... Apr 12 18:29:36.568513 systemd-udevd[1187]: Using default interface naming scheme 'v252'. Apr 12 18:29:36.885776 systemd[1]: Started systemd-udevd.service. Apr 12 18:29:36.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:36.895000 audit: BPF prog-id=26 op=LOAD Apr 12 18:29:36.898481 systemd[1]: Starting systemd-networkd.service... Apr 12 18:29:36.967391 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Apr 12 18:29:36.977633 systemd[1]: Starting systemd-userdbd.service... 
Apr 12 18:29:36.976000 audit: BPF prog-id=27 op=LOAD Apr 12 18:29:36.976000 audit: BPF prog-id=28 op=LOAD Apr 12 18:29:36.976000 audit: BPF prog-id=29 op=LOAD Apr 12 18:29:36.985000 audit[1197]: AVC avc: denied { confidentiality } for pid=1197 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Apr 12 18:29:37.001718 kernel: hv_vmbus: registering driver hv_balloon Apr 12 18:29:37.001819 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:29:37.013882 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Apr 12 18:29:37.013978 kernel: hv_balloon: Memory hot add disabled on ARM64 Apr 12 18:29:36.985000 audit[1197]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab001fd8f0 a1=aa2c a2=ffff9f7624b0 a3=aaab0015b010 items=12 ppid=1187 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:36.985000 audit: CWD cwd="/" Apr 12 18:29:36.985000 audit: PATH item=0 name=(null) inode=6687 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=1 name=(null) inode=9844 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=2 name=(null) inode=9844 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=3 name=(null) inode=9845 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=4 name=(null) inode=9844 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=5 name=(null) inode=9846 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=6 name=(null) inode=9844 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=7 name=(null) inode=9847 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=8 name=(null) inode=9844 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=9 name=(null) inode=9848 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=10 name=(null) inode=9844 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PATH item=11 name=(null) inode=9849 
dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:29:36.985000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:29:37.039797 kernel: hv_vmbus: registering driver hyperv_fb Apr 12 18:29:37.039913 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Apr 12 18:29:37.047547 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Apr 12 18:29:37.054931 kernel: Console: switching to colour dummy device 80x25 Apr 12 18:29:37.063710 kernel: hv_utils: Registering HyperV Utility Driver Apr 12 18:29:37.063787 kernel: Console: switching to colour frame buffer device 128x48 Apr 12 18:29:37.064704 kernel: hv_vmbus: registering driver hv_utils Apr 12 18:29:37.079394 kernel: hv_utils: Heartbeat IC version 3.0 Apr 12 18:29:37.079498 kernel: hv_utils: Shutdown IC version 3.2 Apr 12 18:29:37.083174 kernel: hv_utils: TimeSync IC version 4.0 Apr 12 18:29:37.437010 systemd[1]: Started systemd-userdbd.service. Apr 12 18:29:37.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:37.715092 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1188) Apr 12 18:29:37.736392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:29:37.744497 systemd[1]: Finished systemd-udev-settle.service. Apr 12 18:29:37.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:37.750393 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:29:38.041112 systemd-networkd[1207]: lo: Link UP Apr 12 18:29:38.041435 systemd-networkd[1207]: lo: Gained carrier Apr 12 18:29:38.041913 systemd-networkd[1207]: Enumeration completed Apr 12 18:29:38.042116 systemd[1]: Started systemd-networkd.service. Apr 12 18:29:38.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:38.048425 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:29:38.086615 systemd-networkd[1207]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:29:38.106381 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:29:38.137078 kernel: mlx5_core 032b:00:02.0 enP811s1: Link up Apr 12 18:29:38.162081 kernel: hv_netvsc 000d3a07-6d66-000d-3a07-6d66000d3a07 eth0: Data path switched to VF: enP811s1 Apr 12 18:29:38.162038 systemd-networkd[1207]: enP811s1: Link UP Apr 12 18:29:38.162145 systemd-networkd[1207]: eth0: Link UP Apr 12 18:29:38.162148 systemd-networkd[1207]: eth0: Gained carrier Apr 12 18:29:38.166311 systemd-networkd[1207]: enP811s1: Gained carrier Apr 12 18:29:38.167619 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:29:38.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:38.172920 systemd[1]: Reached target cryptsetup.target. Apr 12 18:29:38.178703 systemd[1]: Starting lvm2-activation.service... Apr 12 18:29:38.182942 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:29:38.184227 systemd-networkd[1207]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 12 18:29:38.206099 systemd[1]: Finished lvm2-activation.service. Apr 12 18:29:38.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:38.210713 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:29:38.215150 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:29:38.215178 systemd[1]: Reached target local-fs.target. Apr 12 18:29:38.219361 systemd[1]: Reached target machines.target. Apr 12 18:29:38.224761 systemd[1]: Starting ldconfig.service... Apr 12 18:29:38.243805 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 18:29:38.243900 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:29:38.245120 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:29:38.250434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:29:38.256925 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:29:38.261714 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:29:38.261785 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:29:38.263110 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:29:38.311583 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1269 (bootctl) Apr 12 18:29:38.312857 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:29:38.504251 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:29:38.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:38.939798 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:29:39.251864 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:29:39.252474 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:29:39.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:39.278934 systemd-fsck[1277]: fsck.fat 4.2 (2021-01-31) Apr 12 18:29:39.278934 systemd-fsck[1277]: /dev/sda1: 236 files, 117047/258078 clusters Apr 12 18:29:39.281745 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Apr 12 18:29:39.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:39.290507 systemd[1]: Mounting boot.mount... Apr 12 18:29:39.293777 systemd-networkd[1207]: eth0: Gained IPv6LL Apr 12 18:29:39.299410 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:29:39.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:39.308518 systemd[1]: Mounted boot.mount. Apr 12 18:29:39.318953 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:29:39.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:39.346876 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:29:39.467512 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:29:40.073628 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 18:29:40.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.083541 kernel: kauditd_printk_skb: 84 callbacks suppressed Apr 12 18:29:40.083610 kernel: audit: type=1130 audit(1712946580.078:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.084926 systemd[1]: Starting audit-rules.service... Apr 12 18:29:40.104960 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:29:40.110493 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:29:40.115000 audit: BPF prog-id=30 op=LOAD Apr 12 18:29:40.121147 systemd[1]: Starting systemd-resolved.service... Apr 12 18:29:40.123095 kernel: audit: type=1334 audit(1712946580.115:168): prog-id=30 op=LOAD Apr 12 18:29:40.126000 audit: BPF prog-id=31 op=LOAD Apr 12 18:29:40.128795 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:29:40.136689 kernel: audit: type=1334 audit(1712946580.126:169): prog-id=31 op=LOAD Apr 12 18:29:40.138540 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:29:40.167205 systemd[1]: Finished clean-ca-certificates.service. Apr 12 18:29:40.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.190543 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:29:40.195165 kernel: audit: type=1130 audit(1712946580.172:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:40.195000 audit[1289]: SYSTEM_BOOT pid=1289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.214549 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:29:40.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.236884 kernel: audit: type=1127 audit(1712946580.195:171): pid=1289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.237497 kernel: audit: type=1130 audit(1712946580.218:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.261352 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 18:29:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.273347 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:29:40.291121 kernel: audit: type=1130 audit(1712946580.266:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.291237 kernel: audit: type=1130 audit(1712946580.289:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.290639 systemd[1]: Reached target time-set.target. Apr 12 18:29:40.346524 systemd-resolved[1287]: Positive Trust Anchors: Apr 12 18:29:40.346538 systemd-resolved[1287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:29:40.346569 systemd-resolved[1287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:29:40.350226 systemd-resolved[1287]: Using system hostname 'ci-3510.3.3-a-e21a461a74'. Apr 12 18:29:40.351650 systemd[1]: Started systemd-resolved.service. Apr 12 18:29:40.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:29:40.356348 systemd[1]: Reached target network.target. Apr 12 18:29:40.378438 kernel: audit: type=1130 audit(1712946580.355:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:29:40.379141 systemd[1]: Reached target network-online.target. Apr 12 18:29:40.384425 systemd[1]: Reached target nss-lookup.target. Apr 12 18:29:40.625953 systemd-timesyncd[1288]: Contacted time server 137.190.2.4:123 (0.flatcar.pool.ntp.org). Apr 12 18:29:40.626411 systemd-timesyncd[1288]: Initial clock synchronization to Fri 2024-04-12 18:29:40.622822 UTC. Apr 12 18:29:40.679784 augenrules[1304]: No rules Apr 12 18:29:40.678000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:29:40.691636 systemd[1]: Finished audit-rules.service. Apr 12 18:29:40.678000 audit[1304]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcde1e840 a2=420 a3=0 items=0 ppid=1283 pid=1304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:29:40.678000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:29:40.692082 kernel: audit: type=1305 audit(1712946580.678:176): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:29:48.629767 ldconfig[1268]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:29:48.641514 systemd[1]: Finished ldconfig.service. Apr 12 18:29:48.647671 systemd[1]: Starting systemd-update-done.service... Apr 12 18:29:48.689941 systemd[1]: Finished systemd-update-done.service. Apr 12 18:29:48.695072 systemd[1]: Reached target sysinit.target. Apr 12 18:29:48.699557 systemd[1]: Started motdgen.path. Apr 12 18:29:48.703504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:29:48.709937 systemd[1]: Started logrotate.timer. Apr 12 18:29:48.713977 systemd[1]: Started mdadm.timer. Apr 12 18:29:48.717635 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:29:48.722302 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:29:48.722333 systemd[1]: Reached target paths.target. Apr 12 18:29:48.726407 systemd[1]: Reached target timers.target. Apr 12 18:29:48.731082 systemd[1]: Listening on dbus.socket. Apr 12 18:29:48.736197 systemd[1]: Starting docker.socket... Apr 12 18:29:48.773824 systemd[1]: Listening on sshd.socket. Apr 12 18:29:48.777800 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:29:48.778296 systemd[1]: Listening on docker.socket. Apr 12 18:29:48.782210 systemd[1]: Reached target sockets.target. Apr 12 18:29:48.786365 systemd[1]: Reached target basic.target. Apr 12 18:29:48.790267 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:29:48.790294 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. 
Apr 12 18:29:48.791399 systemd[1]: Starting containerd.service... Apr 12 18:29:48.796329 systemd[1]: Starting dbus.service... Apr 12 18:29:48.800717 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:29:48.806209 systemd[1]: Starting extend-filesystems.service... Apr 12 18:29:48.812853 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:29:48.814117 systemd[1]: Starting motdgen.service... Apr 12 18:29:48.818722 systemd[1]: Started nvidia.service. Apr 12 18:29:48.823715 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:29:48.828942 systemd[1]: Starting prepare-critools.service... Apr 12 18:29:48.834104 systemd[1]: Starting prepare-helm.service... Apr 12 18:29:48.838952 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:29:48.844188 systemd[1]: Starting sshd-keygen.service... Apr 12 18:29:48.849917 systemd[1]: Starting systemd-logind.service... Apr 12 18:29:48.855778 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:29:48.855836 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:29:48.856336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 18:29:48.857090 systemd[1]: Starting update-engine.service... Apr 12 18:29:48.862793 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:29:48.872615 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:29:48.872806 systemd[1]: Finished ssh-key-proc-cmdline.service. Apr 12 18:29:48.896525 jq[1332]: true Apr 12 18:29:48.897207 jq[1314]: false Apr 12 18:29:48.907391 extend-filesystems[1315]: Found sda Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda1 Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda2 Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda3 Apr 12 18:29:48.912315 extend-filesystems[1315]: Found usr Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda4 Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda6 Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda7 Apr 12 18:29:48.912315 extend-filesystems[1315]: Found sda9 Apr 12 18:29:48.912315 extend-filesystems[1315]: Checking size of /dev/sda9 Apr 12 18:29:48.935503 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:29:48.935686 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:29:48.942078 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:29:48.942245 systemd[1]: Finished motdgen.service. Apr 12 18:29:48.968784 jq[1342]: true Apr 12 18:29:48.992935 env[1340]: time="2024-04-12T18:29:48.992882986Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:29:49.011446 tar[1335]: ./ Apr 12 18:29:49.011446 tar[1335]: ./loopback Apr 12 18:29:49.011746 tar[1336]: crictl Apr 12 18:29:49.011871 tar[1337]: linux-arm64/helm Apr 12 18:29:49.001031 systemd-logind[1326]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Apr 12 18:29:49.005859 systemd-logind[1326]: New seat seat0. 
Apr 12 18:29:49.037917 extend-filesystems[1315]: Old size kept for /dev/sda9 Apr 12 18:29:49.044401 extend-filesystems[1315]: Found sr0 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.043615839Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.043765782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051297652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051396921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051740122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051761839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051775718Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051785757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.051921142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062419 env[1340]: time="2024-04-12T18:29:49.052214668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:29:49.038565 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:29:49.062905 env[1340]: time="2024-04-12T18:29:49.052404167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:29:49.062905 env[1340]: time="2024-04-12T18:29:49.052425725Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 18:29:49.062905 env[1340]: time="2024-04-12T18:29:49.052484678Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:29:49.062905 env[1340]: time="2024-04-12T18:29:49.052497037Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:29:49.038743 systemd[1]: Finished extend-filesystems.service. Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081461528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081511162Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081527520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081570076Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081586794Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081601072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.081614431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082017185Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082039903Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082052661Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082088577Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082102176Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082239080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:29:49.082360 env[1340]: time="2024-04-12T18:29:49.082308552Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:29:49.082968 env[1340]: time="2024-04-12T18:29:49.082939201Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:29:49.083050 env[1340]: time="2024-04-12T18:29:49.083036190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083130 env[1340]: time="2024-04-12T18:29:49.083116901Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:29:49.083242 env[1340]: time="2024-04-12T18:29:49.083227489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083369 env[1340]: time="2024-04-12T18:29:49.083354394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083435 env[1340]: time="2024-04-12T18:29:49.083422307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083493 env[1340]: time="2024-04-12T18:29:49.083480220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Apr 12 18:29:49.083546 env[1340]: time="2024-04-12T18:29:49.083534494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083606 env[1340]: time="2024-04-12T18:29:49.083594007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083669 env[1340]: time="2024-04-12T18:29:49.083656000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083803 env[1340]: time="2024-04-12T18:29:49.083787825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.083879 env[1340]: time="2024-04-12T18:29:49.083865377Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:29:49.084150 env[1340]: time="2024-04-12T18:29:49.084130667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.084309 env[1340]: time="2024-04-12T18:29:49.084292848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.084383 env[1340]: time="2024-04-12T18:29:49.084369880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:29:49.084440 env[1340]: time="2024-04-12T18:29:49.084428393Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:29:49.084500 env[1340]: time="2024-04-12T18:29:49.084485827Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:29:49.084551 env[1340]: time="2024-04-12T18:29:49.084539381Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:29:49.084612 env[1340]: time="2024-04-12T18:29:49.084598814Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:29:49.084699 env[1340]: time="2024-04-12T18:29:49.084685684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 12 18:29:49.084989 env[1340]: time="2024-04-12T18:29:49.084935776Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.085208865Z" level=info msg="Connect containerd service" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.085255900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.086020093Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.086368414Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.086405050Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.088921166Z" level=info msg="Start subscribing containerd event" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.088981799Z" level=info msg="Start recovering state" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.089079668Z" level=info msg="Start event monitor" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.089102106Z" level=info msg="Start snapshots syncer" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.089112544Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.089120344Z" level=info msg="Start streaming server" Apr 12 18:29:49.102623 env[1340]: time="2024-04-12T18:29:49.091003451Z" level=info msg="containerd successfully booted in 0.098839s" Apr 12 18:29:49.086535 systemd[1]: Started containerd.service. Apr 12 18:29:49.105166 tar[1335]: ./bandwidth Apr 12 18:29:49.163568 bash[1369]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:29:49.164479 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:29:49.225585 tar[1335]: ./ptp Apr 12 18:29:49.280150 dbus-daemon[1313]: [system] SELinux support is enabled Apr 12 18:29:49.286124 dbus-daemon[1313]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 12 18:29:49.280310 systemd[1]: Started dbus.service. Apr 12 18:29:49.285567 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:29:49.285588 systemd[1]: Reached target system-config.target. Apr 12 18:29:49.292681 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:29:49.292704 systemd[1]: Reached target user-config.target. Apr 12 18:29:49.298870 systemd[1]: Started systemd-logind.service. Apr 12 18:29:49.326902 systemd[1]: nvidia.service: Deactivated successfully. Apr 12 18:29:49.345416 tar[1335]: ./vlan Apr 12 18:29:49.446727 tar[1335]: ./host-device Apr 12 18:29:49.540110 tar[1335]: ./tuning Apr 12 18:29:49.573003 tar[1337]: linux-arm64/LICENSE Apr 12 18:29:49.573144 tar[1337]: linux-arm64/README.md Apr 12 18:29:49.579113 systemd[1]: Finished prepare-helm.service. Apr 12 18:29:49.601423 tar[1335]: ./vrf Apr 12 18:29:49.631177 tar[1335]: ./sbr Apr 12 18:29:49.660173 tar[1335]: ./tap Apr 12 18:29:49.694518 tar[1335]: ./dhcp Apr 12 18:29:49.832531 update_engine[1329]: I0412 18:29:49.808305 1329 main.cc:92] Flatcar Update Engine starting Apr 12 18:29:49.838608 tar[1335]: ./static Apr 12 18:29:49.881328 tar[1335]: ./firewall Apr 12 18:29:49.889493 systemd[1]: Started update-engine.service. Apr 12 18:29:49.889883 update_engine[1329]: I0412 18:29:49.889539 1329 update_check_scheduler.cc:74] Next update check in 10m6s Apr 12 18:29:49.898355 systemd[1]: Started locksmithd.service. Apr 12 18:29:49.916353 systemd[1]: Finished prepare-critools.service. Apr 12 18:29:49.936751 tar[1335]: ./macvlan Apr 12 18:29:49.970222 tar[1335]: ./dummy Apr 12 18:29:50.003158 tar[1335]: ./bridge Apr 12 18:29:50.038896 tar[1335]: ./ipvlan Apr 12 18:29:50.071985 tar[1335]: ./portmap Apr 12 18:29:50.103536 tar[1335]: ./host-local Apr 12 18:29:50.215799 systemd[1]: Finished prepare-cni-plugins.service. 
Apr 12 18:29:51.791827 locksmithd[1416]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:29:52.121433 sshd_keygen[1333]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:29:52.138526 systemd[1]: Finished sshd-keygen.service. Apr 12 18:29:52.144217 systemd[1]: Starting issuegen.service... Apr 12 18:29:52.148881 systemd[1]: Started waagent.service. Apr 12 18:29:52.153412 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:29:52.153576 systemd[1]: Finished issuegen.service. Apr 12 18:29:52.159243 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:29:52.185548 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:29:52.192209 systemd[1]: Started getty@tty1.service. Apr 12 18:29:52.198260 systemd[1]: Started serial-getty@ttyAMA0.service. Apr 12 18:29:52.203301 systemd[1]: Reached target getty.target. Apr 12 18:29:52.208166 systemd[1]: Reached target multi-user.target. Apr 12 18:29:52.214513 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:29:52.227043 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:29:52.227236 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:29:52.233028 systemd[1]: Startup finished in 731ms (kernel) + 17.884s (initrd) + 29.363s (userspace) = 47.978s. Apr 12 18:29:53.001503 login[1438]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Apr 12 18:29:53.022514 login[1437]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 12 18:29:53.087114 systemd[1]: Created slice user-500.slice. Apr 12 18:29:53.088760 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:29:53.091123 systemd-logind[1326]: New session 1 of user core. Apr 12 18:29:53.130276 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:29:53.131702 systemd[1]: Starting user@500.service... Apr 12 18:29:53.166536 (systemd)[1441]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:29:53.414340 systemd[1441]: Queued start job for default target default.target. Apr 12 18:29:53.414845 systemd[1441]: Reached target paths.target. Apr 12 18:29:53.414865 systemd[1441]: Reached target sockets.target. Apr 12 18:29:53.414875 systemd[1441]: Reached target timers.target. Apr 12 18:29:53.414885 systemd[1441]: Reached target basic.target. Apr 12 18:29:53.414984 systemd[1]: Started user@500.service. Apr 12 18:29:53.415851 systemd[1]: Started session-1.scope. Apr 12 18:29:53.416329 systemd[1441]: Reached target default.target. Apr 12 18:29:53.416477 systemd[1441]: Startup finished in 244ms. Apr 12 18:29:54.003241 login[1438]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 12 18:29:54.007420 systemd[1]: Started session-2.scope. Apr 12 18:29:54.007912 systemd-logind[1326]: New session 2 of user core. 
Apr 12 18:29:59.927089 waagent[1434]: 2024-04-12T18:29:59.926949Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Apr 12 18:29:59.934169 waagent[1434]: 2024-04-12T18:29:59.934058Z INFO Daemon Daemon OS: flatcar 3510.3.3 Apr 12 18:29:59.939149 waagent[1434]: 2024-04-12T18:29:59.939039Z INFO Daemon Daemon Python: 3.9.16 Apr 12 18:29:59.946301 waagent[1434]: 2024-04-12T18:29:59.946187Z INFO Daemon Daemon Run daemon Apr 12 18:29:59.950786 waagent[1434]: 2024-04-12T18:29:59.950697Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.3' Apr 12 18:29:59.967714 waagent[1434]: 2024-04-12T18:29:59.967560Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Apr 12 18:29:59.982743 waagent[1434]: 2024-04-12T18:29:59.982594Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 12 18:29:59.993324 waagent[1434]: 2024-04-12T18:29:59.993228Z INFO Daemon Daemon cloud-init is enabled: False Apr 12 18:29:59.998786 waagent[1434]: 2024-04-12T18:29:59.998687Z INFO Daemon Daemon Using waagent for provisioning Apr 12 18:30:00.004944 waagent[1434]: 2024-04-12T18:30:00.004856Z INFO Daemon Daemon Activate resource disk Apr 12 18:30:00.009837 waagent[1434]: 2024-04-12T18:30:00.009749Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 12 18:30:00.024534 waagent[1434]: 2024-04-12T18:30:00.024435Z INFO Daemon Daemon Found device: None Apr 12 18:30:00.029383 waagent[1434]: 2024-04-12T18:30:00.029287Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 12 18:30:00.038311 waagent[1434]: 2024-04-12T18:30:00.038214Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 12 18:30:00.050538 waagent[1434]: 2024-04-12T18:30:00.050461Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 12 18:30:00.056656 waagent[1434]: 2024-04-12T18:30:00.056567Z INFO Daemon Daemon Running default provisioning handler Apr 12 18:30:00.069877 waagent[1434]: 2024-04-12T18:30:00.069733Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Apr 12 18:30:00.084991 waagent[1434]: 2024-04-12T18:30:00.084827Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 12 18:30:00.094784 waagent[1434]: 2024-04-12T18:30:00.094686Z INFO Daemon Daemon cloud-init is enabled: False Apr 12 18:30:00.100387 waagent[1434]: 2024-04-12T18:30:00.100291Z INFO Daemon Daemon Copying ovf-env.xml Apr 12 18:30:00.187865 waagent[1434]: 2024-04-12T18:30:00.187669Z INFO Daemon Daemon Successfully mounted dvd Apr 12 18:30:00.320743 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 12 18:30:00.551140 waagent[1434]: 2024-04-12T18:30:00.550954Z INFO Daemon Daemon Detect protocol endpoint Apr 12 18:30:00.556454 waagent[1434]: 2024-04-12T18:30:00.556362Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 12 18:30:00.562383 waagent[1434]: 2024-04-12T18:30:00.562297Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Apr 12 18:30:00.568931 waagent[1434]: 2024-04-12T18:30:00.568852Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 12 18:30:00.574503 waagent[1434]: 2024-04-12T18:30:00.574423Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 12 18:30:00.579708 waagent[1434]: 2024-04-12T18:30:00.579631Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 12 18:30:00.787499 waagent[1434]: 2024-04-12T18:30:00.787427Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 12 18:30:00.795043 waagent[1434]: 2024-04-12T18:30:00.794994Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 12 18:30:00.800370 waagent[1434]: 2024-04-12T18:30:00.800289Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 12 18:30:01.677457 waagent[1434]: 2024-04-12T18:30:01.677291Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 12 18:30:01.692589 waagent[1434]: 2024-04-12T18:30:01.692506Z INFO Daemon Daemon Forcing an update of the goal state.. Apr 12 18:30:01.699072 waagent[1434]: 2024-04-12T18:30:01.698986Z INFO Daemon Daemon Fetching goal state [incarnation 1] Apr 12 18:30:01.785807 waagent[1434]: 2024-04-12T18:30:01.785651Z INFO Daemon Daemon Found private key matching thumbprint 61940AF45AAC499280C07AA9535BB27C0F535DDE Apr 12 18:30:01.794482 waagent[1434]: 2024-04-12T18:30:01.794394Z INFO Daemon Daemon Certificate with thumbprint 4DE2088897BCDF88FC68171168680CC66D163809 has no matching private key. Apr 12 18:30:01.804182 waagent[1434]: 2024-04-12T18:30:01.804076Z INFO Daemon Daemon Fetch goal state completed Apr 12 18:30:01.832653 waagent[1434]: 2024-04-12T18:30:01.832593Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: bf2169c2-5345-4f5e-ba97-c22fc8b04200 New eTag: 11842760689899712867] Apr 12 18:30:01.844185 waagent[1434]: 2024-04-12T18:30:01.844102Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Apr 12 18:30:01.859612 waagent[1434]: 2024-04-12T18:30:01.859549Z INFO Daemon Daemon Starting provisioning Apr 12 18:30:01.865718 waagent[1434]: 2024-04-12T18:30:01.865627Z INFO Daemon Daemon Handle ovf-env.xml. Apr 12 18:30:01.871056 waagent[1434]: 2024-04-12T18:30:01.870981Z INFO Daemon Daemon Set hostname [ci-3510.3.3-a-e21a461a74] Apr 12 18:30:01.916582 waagent[1434]: 2024-04-12T18:30:01.916437Z INFO Daemon Daemon Publish hostname [ci-3510.3.3-a-e21a461a74] Apr 12 18:30:01.923057 waagent[1434]: 2024-04-12T18:30:01.922960Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 12 18:30:01.930663 waagent[1434]: 2024-04-12T18:30:01.930555Z INFO Daemon Daemon Primary interface is [eth0] Apr 12 18:30:01.947649 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Apr 12 18:30:01.947824 systemd[1]: Stopped systemd-networkd-wait-online.service. Apr 12 18:30:01.947882 systemd[1]: Stopping systemd-networkd-wait-online.service... Apr 12 18:30:01.948144 systemd[1]: Stopping systemd-networkd.service... Apr 12 18:30:01.955125 systemd-networkd[1207]: eth0: DHCPv6 lease lost Apr 12 18:30:01.956908 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 12 18:30:01.957110 systemd[1]: Stopped systemd-networkd.service. Apr 12 18:30:01.959215 systemd[1]: Starting systemd-networkd.service... 
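The endpoint probed above, 168.63.129.16, is Azure's fixed WireServer address: the agent first verifies a route to it exists, then negotiates a protocol version over plain HTTP (which is why the earlier "HTTPS is unavailable and required" warnings are non-fatal here). A minimal sketch of that version negotiation, assuming the documented ?comp=versions query; the timeout value is an arbitrary choice:

    import urllib.request

    WIRESERVER = "http://168.63.129.16"

    # Ask the WireServer which protocol versions it supports; the agent
    # then picks 2012-11-30 from this list, as logged above.
    with urllib.request.urlopen(WIRESERVER + "/?comp=versions", timeout=5) as resp:
        print(resp.read().decode())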
Apr 12 18:30:01.986834 systemd-networkd[1486]: enP811s1: Link UP Apr 12 18:30:01.986846 systemd-networkd[1486]: enP811s1: Gained carrier Apr 12 18:30:01.987791 systemd-networkd[1486]: eth0: Link UP Apr 12 18:30:01.987801 systemd-networkd[1486]: eth0: Gained carrier Apr 12 18:30:01.988380 systemd-networkd[1486]: lo: Link UP Apr 12 18:30:01.988391 systemd-networkd[1486]: lo: Gained carrier Apr 12 18:30:01.988642 systemd-networkd[1486]: eth0: Gained IPv6LL Apr 12 18:30:01.989787 systemd-networkd[1486]: Enumeration completed Apr 12 18:30:01.989908 systemd[1]: Started systemd-networkd.service. Apr 12 18:30:01.991684 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:30:01.997951 waagent[1434]: 2024-04-12T18:30:01.991973Z INFO Daemon Daemon Create user account if not exists Apr 12 18:30:01.998659 waagent[1434]: 2024-04-12T18:30:01.998570Z INFO Daemon Daemon User core already exists, skip useradd Apr 12 18:30:02.005806 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:30:02.006945 waagent[1434]: 2024-04-12T18:30:02.006837Z INFO Daemon Daemon Configure sudoer Apr 12 18:30:02.012231 waagent[1434]: 2024-04-12T18:30:02.012139Z INFO Daemon Daemon Configure sshd Apr 12 18:30:02.017576 waagent[1434]: 2024-04-12T18:30:02.017313Z INFO Daemon Daemon Deploy ssh public key. Apr 12 18:30:02.033178 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Apr 12 18:30:02.036207 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:30:03.275408 waagent[1434]: 2024-04-12T18:30:03.275333Z INFO Daemon Daemon Provisioning complete Apr 12 18:30:03.297631 waagent[1434]: 2024-04-12T18:30:03.297555Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 12 18:30:03.304341 waagent[1434]: 2024-04-12T18:30:03.304251Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 12 18:30:03.314706 waagent[1434]: 2024-04-12T18:30:03.314621Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Apr 12 18:30:03.632042 waagent[1495]: 2024-04-12T18:30:03.631886Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Apr 12 18:30:03.633243 waagent[1495]: 2024-04-12T18:30:03.633174Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:30:03.633515 waagent[1495]: 2024-04-12T18:30:03.633466Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:30:03.646704 waagent[1495]: 2024-04-12T18:30:03.646592Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Apr 12 18:30:03.647103 waagent[1495]: 2024-04-12T18:30:03.647022Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Apr 12 18:30:03.718839 waagent[1495]: 2024-04-12T18:30:03.718698Z INFO ExtHandler ExtHandler Found private key matching thumbprint 61940AF45AAC499280C07AA9535BB27C0F535DDE Apr 12 18:30:03.719270 waagent[1495]: 2024-04-12T18:30:03.719213Z INFO ExtHandler ExtHandler Certificate with thumbprint 4DE2088897BCDF88FC68171168680CC66D163809 has no matching private key. 
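The 40-hex-digit thumbprints the agent matches against private keys are, by all appearances, SHA-1 digests of each certificate's DER encoding (an assumption inferred from the digest length; the file path below is hypothetical). A sketch of computing such a thumbprint from a PEM file:

    import hashlib
    import ssl

    # Convert PEM -> DER, then take the uppercase SHA-1 hex digest,
    # which has the same 40-character shape as the thumbprints logged above.
    pem = open("cert.pem").read()          # hypothetical path
    der = ssl.PEM_cert_to_DER_cert(pem)
    print(hashlib.sha1(der).hexdigest().upper())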
Apr 12 18:30:03.719607 waagent[1495]: 2024-04-12T18:30:03.719556Z INFO ExtHandler ExtHandler Fetch goal state completed Apr 12 18:30:03.744051 waagent[1495]: 2024-04-12T18:30:03.743990Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 44bcb3fe-8364-43ef-90da-8fcc1bf8e41b New eTag: 11842760689899712867] Apr 12 18:30:03.744879 waagent[1495]: 2024-04-12T18:30:03.744815Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Apr 12 18:30:03.876957 waagent[1495]: 2024-04-12T18:30:03.876814Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.3; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 12 18:30:03.908451 waagent[1495]: 2024-04-12T18:30:03.908297Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1495 Apr 12 18:30:03.912610 waagent[1495]: 2024-04-12T18:30:03.912524Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 12 18:30:03.914220 waagent[1495]: 2024-04-12T18:30:03.914130Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 12 18:30:04.061360 waagent[1495]: 2024-04-12T18:30:04.061300Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 12 18:30:04.061979 waagent[1495]: 2024-04-12T18:30:04.061921Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 12 18:30:04.070189 waagent[1495]: 2024-04-12T18:30:04.070115Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 12 18:30:04.070773 waagent[1495]: 2024-04-12T18:30:04.070711Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Apr 12 18:30:04.072068 waagent[1495]: 2024-04-12T18:30:04.071988Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Apr 12 18:30:04.073714 waagent[1495]: 2024-04-12T18:30:04.073634Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 12 18:30:04.074422 waagent[1495]: 2024-04-12T18:30:04.074359Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:30:04.074684 waagent[1495]: 2024-04-12T18:30:04.074632Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:30:04.075436 waagent[1495]: 2024-04-12T18:30:04.075367Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 12 18:30:04.076169 waagent[1495]: 2024-04-12T18:30:04.076089Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
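The "[Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'" error is expected on Flatcar: /lib/systemd/system lives on the read-only /usr partition, while locally added units belong under the writable /etc/systemd/system. A sketch of the unit installation the agent attempts, pointed at the writable path; the unit body here is entirely hypothetical, only the script path /var/lib/waagent/waagent-network-setup.py comes from the log:

    from pathlib import Path

    UNIT = "\n".join([
        "[Unit]",
        "Description=waagent network setup (sketch)",
        "",
        "[Service]",
        "Type=oneshot",
        "ExecStart=/usr/bin/env python3 /var/lib/waagent/waagent-network-setup.py",
        "",
        "[Install]",
        "WantedBy=multi-user.target",
    ]) + "\n"

    # /etc/systemd/system is writable on Flatcar; /lib/systemd/system is not.
    Path("/etc/systemd/system/waagent-network-setup.service").write_text(UNIT)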
Apr 12 18:30:04.076694 waagent[1495]: 2024-04-12T18:30:04.076627Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:30:04.076956 waagent[1495]: 2024-04-12T18:30:04.076844Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 12 18:30:04.077256 waagent[1495]: 2024-04-12T18:30:04.077186Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:30:04.077397 waagent[1495]: 2024-04-12T18:30:04.077334Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 12 18:30:04.077397 waagent[1495]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 12 18:30:04.077397 waagent[1495]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 12 18:30:04.077397 waagent[1495]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 12 18:30:04.077397 waagent[1495]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:30:04.077397 waagent[1495]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:30:04.077397 waagent[1495]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:30:04.078057 waagent[1495]: 2024-04-12T18:30:04.077977Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 12 18:30:04.080694 waagent[1495]: 2024-04-12T18:30:04.080541Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 12 18:30:04.081204 waagent[1495]: 2024-04-12T18:30:04.081121Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Apr 12 18:30:04.081595 waagent[1495]: 2024-04-12T18:30:04.081528Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 12 18:30:04.082316 waagent[1495]: 2024-04-12T18:30:04.082239Z INFO EnvHandler ExtHandler Configure routes Apr 12 18:30:04.085342 waagent[1495]: 2024-04-12T18:30:04.085239Z INFO EnvHandler ExtHandler Gateway:None Apr 12 18:30:04.086016 waagent[1495]: 2024-04-12T18:30:04.085964Z INFO EnvHandler ExtHandler Routes:None Apr 12 18:30:04.096709 waagent[1495]: 2024-04-12T18:30:04.096633Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Apr 12 18:30:04.097462 waagent[1495]: 2024-04-12T18:30:04.097404Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Apr 12 18:30:04.098494 waagent[1495]: 2024-04-12T18:30:04.098433Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Apr 12 18:30:04.127407 waagent[1495]: 2024-04-12T18:30:04.127330Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
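The routing table dumped above is read straight from /proc/net/route, where Destination, Gateway and Mask are little-endian hexadecimal IPv4 addresses. Decoding them recovers familiar values: 0114C80A is the default gateway 10.200.20.1 (matching the DHCP lease logged earlier), 10813FA8 is the WireServer 168.63.129.16, and FEA9FEA9 is the link-local metadata address 169.254.169.254. A small decoder:

    import socket
    import struct

    def decode(hexip):
        # /proc/net/route stores IPv4 addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<I", int(hexip, 16)))

    for field in ("00000000", "0114C80A", "10813FA8", "FEA9FEA9"):
        print(field, "->", decode(field))
    # 0.0.0.0, 10.200.20.1, 168.63.129.16, 169.254.169.254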
Apr 12 18:30:04.133595 waagent[1495]: 2024-04-12T18:30:04.133516Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1486' Apr 12 18:30:04.207911 waagent[1495]: 2024-04-12T18:30:04.207758Z INFO MonitorHandler ExtHandler Network interfaces: Apr 12 18:30:04.207911 waagent[1495]: Executing ['ip', '-a', '-o', 'link']: Apr 12 18:30:04.207911 waagent[1495]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 12 18:30:04.207911 waagent[1495]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:6d:66 brd ff:ff:ff:ff:ff:ff Apr 12 18:30:04.207911 waagent[1495]: 3: enP811s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:6d:66 brd ff:ff:ff:ff:ff:ff\ altname enP811p0s2 Apr 12 18:30:04.207911 waagent[1495]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 12 18:30:04.207911 waagent[1495]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 12 18:30:04.207911 waagent[1495]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 12 18:30:04.207911 waagent[1495]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 12 18:30:04.207911 waagent[1495]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Apr 12 18:30:04.207911 waagent[1495]: 2: eth0 inet6 fe80::20d:3aff:fe07:6d66/64 scope link \ valid_lft forever preferred_lft forever Apr 12 18:30:04.238542 waagent[1495]: 2024-04-12T18:30:04.238473Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.10.0.8 -- exiting Apr 12 18:30:04.317348 waagent[1434]: 2024-04-12T18:30:04.317221Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Apr 12 18:30:04.321227 waagent[1434]: 2024-04-12T18:30:04.321164Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.10.0.8 to be the latest agent Apr 12 18:30:05.549410 waagent[1524]: 2024-04-12T18:30:05.549294Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.10.0.8) Apr 12 18:30:05.550171 waagent[1524]: 2024-04-12T18:30:05.550106Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.3 Apr 12 18:30:05.550306 waagent[1524]: 2024-04-12T18:30:05.550260Z INFO ExtHandler ExtHandler Python: 3.9.16 Apr 12 18:30:05.550431 waagent[1524]: 2024-04-12T18:30:05.550389Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Apr 12 18:30:05.559452 waagent[1524]: 2024-04-12T18:30:05.559308Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.3; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 12 18:30:05.559906 waagent[1524]: 2024-04-12T18:30:05.559845Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:30:05.560070 waagent[1524]: 2024-04-12T18:30:05.560014Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:30:05.574331 waagent[1524]: 2024-04-12T18:30:05.574232Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 12 18:30:05.586801 waagent[1524]: 2024-04-12T18:30:05.586736Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.149 Apr 12 18:30:05.587949 waagent[1524]: 2024-04-12T18:30:05.587885Z INFO ExtHandler Apr 12 
18:30:05.588137 waagent[1524]: 2024-04-12T18:30:05.588086Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d0a46efc-939f-4c77-98fd-3ab390c9db5f eTag: 11842760689899712867 source: Fabric] Apr 12 18:30:05.588923 waagent[1524]: 2024-04-12T18:30:05.588863Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 12 18:30:05.590201 waagent[1524]: 2024-04-12T18:30:05.590137Z INFO ExtHandler Apr 12 18:30:05.590341 waagent[1524]: 2024-04-12T18:30:05.590295Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 12 18:30:05.597159 waagent[1524]: 2024-04-12T18:30:05.597095Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 12 18:30:05.597709 waagent[1524]: 2024-04-12T18:30:05.597657Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Apr 12 18:30:05.622654 waagent[1524]: 2024-04-12T18:30:05.622587Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Apr 12 18:30:05.697799 waagent[1524]: 2024-04-12T18:30:05.697640Z INFO ExtHandler Downloaded certificate {'thumbprint': '61940AF45AAC499280C07AA9535BB27C0F535DDE', 'hasPrivateKey': True} Apr 12 18:30:05.698986 waagent[1524]: 2024-04-12T18:30:05.698923Z INFO ExtHandler Downloaded certificate {'thumbprint': '4DE2088897BCDF88FC68171168680CC66D163809', 'hasPrivateKey': False} Apr 12 18:30:05.700094 waagent[1524]: 2024-04-12T18:30:05.700018Z INFO ExtHandler Fetch goal state completed Apr 12 18:30:05.726599 waagent[1524]: 2024-04-12T18:30:05.726451Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Apr 12 18:30:05.740399 waagent[1524]: 2024-04-12T18:30:05.740274Z INFO ExtHandler ExtHandler WALinuxAgent-2.10.0.8 running as process 1524 Apr 12 18:30:05.744187 waagent[1524]: 2024-04-12T18:30:05.744096Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.3', '', 'Flatcar Container Linux by Kinvolk'] Apr 12 18:30:05.745733 waagent[1524]: 2024-04-12T18:30:05.745663Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 12 18:30:05.751452 waagent[1524]: 2024-04-12T18:30:05.751381Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 12 18:30:05.751918 waagent[1524]: 2024-04-12T18:30:05.751858Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 12 18:30:05.760421 waagent[1524]: 2024-04-12T18:30:05.760347Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 12 18:30:05.761074 waagent[1524]: 2024-04-12T18:30:05.761002Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Apr 12 18:30:05.768226 waagent[1524]: 2024-04-12T18:30:05.768085Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 12 18:30:05.769348 waagent[1524]: 2024-04-12T18:30:05.769281Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 12 18:30:05.771080 waagent[1524]: 2024-04-12T18:30:05.770992Z INFO ExtHandler ExtHandler Starting env monitor service. 
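The earlier "Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1486'" error is a parsing slip rather than a system fault: systemctl show --property MainPID prints the key as well as the value, so the output must be split on '=' before conversion (1486 is indeed systemd-networkd's PID in this boot). A sketch of the failing call and the fix; the unit name is chosen to match the log:

    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "--property", "MainPID", "systemd-networkd"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()                      # e.g. "MainPID=1486"

    # int(out) raises exactly the ValueError seen in the log;
    # splitting off the key first succeeds:
    pid = int(out.split("=", 1)[1])
    print(pid)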
Apr 12 18:30:05.771947 waagent[1524]: 2024-04-12T18:30:05.771878Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:30:05.772276 waagent[1524]: 2024-04-12T18:30:05.772220Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:30:05.773012 waagent[1524]: 2024-04-12T18:30:05.772954Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 12 18:30:05.773456 waagent[1524]: 2024-04-12T18:30:05.773400Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 12 18:30:05.773456 waagent[1524]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 12 18:30:05.773456 waagent[1524]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Apr 12 18:30:05.773456 waagent[1524]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 12 18:30:05.773456 waagent[1524]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:30:05.773456 waagent[1524]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:30:05.773456 waagent[1524]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 12 18:30:05.776822 waagent[1524]: 2024-04-12T18:30:05.776651Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 12 18:30:05.777101 waagent[1524]: 2024-04-12T18:30:05.777007Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 12 18:30:05.777933 waagent[1524]: 2024-04-12T18:30:05.777853Z INFO EnvHandler ExtHandler Configure routes Apr 12 18:30:05.778105 waagent[1524]: 2024-04-12T18:30:05.778027Z INFO EnvHandler ExtHandler Gateway:None Apr 12 18:30:05.778354 waagent[1524]: 2024-04-12T18:30:05.778304Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 12 18:30:05.778446 waagent[1524]: 2024-04-12T18:30:05.778197Z INFO EnvHandler ExtHandler Routes:None Apr 12 18:30:05.779404 waagent[1524]: 2024-04-12T18:30:05.779047Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 12 18:30:05.780033 waagent[1524]: 2024-04-12T18:30:05.779960Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 12 18:30:05.783255 waagent[1524]: 2024-04-12T18:30:05.783120Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 12 18:30:05.783957 waagent[1524]: 2024-04-12T18:30:05.783896Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Apr 12 18:30:05.785374 waagent[1524]: 2024-04-12T18:30:05.785291Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 12 18:30:05.801506 waagent[1524]: 2024-04-12T18:30:05.801228Z INFO ExtHandler ExtHandler Downloading agent manifest Apr 12 18:30:05.808280 waagent[1524]: 2024-04-12T18:30:05.808179Z INFO MonitorHandler ExtHandler Network interfaces: Apr 12 18:30:05.808280 waagent[1524]: Executing ['ip', '-a', '-o', 'link']: Apr 12 18:30:05.808280 waagent[1524]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 12 18:30:05.808280 waagent[1524]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:6d:66 brd ff:ff:ff:ff:ff:ff Apr 12 18:30:05.808280 waagent[1524]: 3: enP811s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:6d:66 brd ff:ff:ff:ff:ff:ff\ altname enP811p0s2 Apr 12 18:30:05.808280 waagent[1524]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 12 18:30:05.808280 waagent[1524]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 12 18:30:05.808280 waagent[1524]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 12 18:30:05.808280 waagent[1524]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 12 18:30:05.808280 waagent[1524]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Apr 12 18:30:05.808280 waagent[1524]: 2: eth0 inet6 fe80::20d:3aff:fe07:6d66/64 scope link \ valid_lft forever preferred_lft forever Apr 12 18:30:05.822115 waagent[1524]: 2024-04-12T18:30:05.822004Z INFO ExtHandler ExtHandler Apr 12 18:30:05.823319 waagent[1524]: 2024-04-12T18:30:05.823247Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7e61b062-9832-403b-9a82-0bc1c50e0b0f correlation e18d58ab-1eb6-42c2-9543-3851c19a45eb created: 2024-04-12T18:28:10.722420Z] Apr 12 18:30:05.826949 waagent[1524]: 2024-04-12T18:30:05.826861Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Apr 12 18:30:05.833733 waagent[1524]: 2024-04-12T18:30:05.833649Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 11 ms] Apr 12 18:30:05.857001 waagent[1524]: 2024-04-12T18:30:05.856929Z INFO ExtHandler ExtHandler Looking for existing remote access users. Apr 12 18:30:05.883999 waagent[1524]: 2024-04-12T18:30:05.883907Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.10.0.8 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4568DC57-11BA-49D7-A7DB-E9E2F7C35A30;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Apr 12 18:30:06.104139 waagent[1524]: 2024-04-12T18:30:06.103925Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Apr 12 18:30:06.104139 waagent[1524]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:30:06.104139 waagent[1524]: pkts bytes target prot opt in out source destination Apr 12 18:30:06.104139 waagent[1524]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:30:06.104139 waagent[1524]: pkts bytes target prot opt in out source destination Apr 12 18:30:06.104139 waagent[1524]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:30:06.104139 waagent[1524]: pkts bytes target prot opt in out source destination Apr 12 18:30:06.104139 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 12 18:30:06.104139 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 12 18:30:06.104139 waagent[1524]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 12 18:30:06.112689 waagent[1524]: 2024-04-12T18:30:06.112538Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 12 18:30:06.112689 waagent[1524]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:30:06.112689 waagent[1524]: pkts bytes target prot opt in out source destination Apr 12 18:30:06.112689 waagent[1524]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:30:06.112689 waagent[1524]: pkts bytes target prot opt in out source destination Apr 12 18:30:06.112689 waagent[1524]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 12 18:30:06.112689 waagent[1524]: pkts bytes target prot opt in out source destination Apr 12 18:30:06.112689 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 12 18:30:06.112689 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 12 18:30:06.112689 waagent[1524]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 12 18:30:06.113288 waagent[1524]: 2024-04-12T18:30:06.113233Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 12 18:30:25.484811 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Apr 12 18:30:35.280148 update_engine[1329]: I0412 18:30:35.280107 1329 update_attempter.cc:509] Updating boot flags... Apr 12 18:30:55.234383 systemd[1]: Created slice system-sshd.slice. Apr 12 18:30:55.235470 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.12.6:33110.service. Apr 12 18:30:55.873987 sshd[1642]: Accepted publickey for core from 10.200.12.6 port 33110 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:55.893141 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:55.897705 systemd[1]: Started session-3.scope. Apr 12 18:30:55.898136 systemd-logind[1326]: New session 3 of user core. Apr 12 18:30:56.236941 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.12.6:33116.service. Apr 12 18:30:56.635775 sshd[1647]: Accepted publickey for core from 10.200.12.6 port 33116 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:56.636876 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:56.641096 systemd[1]: Started session-4.scope. Apr 12 18:30:56.642092 systemd-logind[1326]: New session 4 of user core. Apr 12 18:30:56.942182 sshd[1647]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:56.944803 systemd[1]: sshd@1-10.200.20.12:22-10.200.12.6:33116.service: Deactivated successfully. Apr 12 18:30:56.945506 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:30:56.946036 systemd-logind[1326]: Session 4 logged out. Waiting for processes to exit. 
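Returning to the firewall listing above: the three OUTPUT-chain entries permit DNS (tcp/53) and root-owned traffic to the WireServer and drop any other new connection to it, so unprivileged processes cannot reach 168.63.129.16. One plausible set of iptables invocations that would produce those rules, as a sketch only; the agent's real setup code may differ in table, ordering, or flags:

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        # allow DNS to the wire server
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # allow traffic owned by root (uid 0), i.e. the agent itself
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop any other new or invalid connection attempt
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(
            ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER] + rule,
            check=True,
        )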
Apr 12 18:30:56.946964 systemd-logind[1326]: Removed session 4. Apr 12 18:30:57.012340 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.12.6:33128.service. Apr 12 18:30:57.408296 sshd[1653]: Accepted publickey for core from 10.200.12.6 port 33128 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:57.409807 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:57.413838 systemd[1]: Started session-5.scope. Apr 12 18:30:57.414456 systemd-logind[1326]: New session 5 of user core. Apr 12 18:30:57.703075 sshd[1653]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:57.705532 systemd[1]: sshd@2-10.200.20.12:22-10.200.12.6:33128.service: Deactivated successfully. Apr 12 18:30:57.706172 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:30:57.706729 systemd-logind[1326]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:30:57.707500 systemd-logind[1326]: Removed session 5. Apr 12 18:30:57.775661 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.12.6:33130.service. Apr 12 18:30:58.204912 sshd[1659]: Accepted publickey for core from 10.200.12.6 port 33130 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:58.206181 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:58.209910 systemd-logind[1326]: New session 6 of user core. Apr 12 18:30:58.210356 systemd[1]: Started session-6.scope. Apr 12 18:30:58.519751 sshd[1659]: pam_unix(sshd:session): session closed for user core Apr 12 18:30:58.522526 systemd[1]: sshd@3-10.200.20.12:22-10.200.12.6:33130.service: Deactivated successfully. Apr 12 18:30:58.523223 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:30:58.523750 systemd-logind[1326]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:30:58.524566 systemd-logind[1326]: Removed session 6. Apr 12 18:30:58.586683 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.12.6:33144.service. Apr 12 18:30:58.985539 sshd[1665]: Accepted publickey for core from 10.200.12.6 port 33144 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:30:58.986774 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:30:58.990570 systemd-logind[1326]: New session 7 of user core. Apr 12 18:30:58.990919 systemd[1]: Started session-7.scope. Apr 12 18:30:59.557596 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:30:59.557789 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:31:00.292798 systemd[1]: Starting docker.service... 
Apr 12 18:31:00.355848 env[1683]: time="2024-04-12T18:31:00.355792014Z" level=info msg="Starting up" Apr 12 18:31:00.357118 env[1683]: time="2024-04-12T18:31:00.357057878Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:31:00.357217 env[1683]: time="2024-04-12T18:31:00.357203542Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:31:00.357287 env[1683]: time="2024-04-12T18:31:00.357271255Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:31:00.357343 env[1683]: time="2024-04-12T18:31:00.357330889Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:31:00.359612 env[1683]: time="2024-04-12T18:31:00.359589046Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:31:00.359708 env[1683]: time="2024-04-12T18:31:00.359693754Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:31:00.359772 env[1683]: time="2024-04-12T18:31:00.359757387Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:31:00.359901 env[1683]: time="2024-04-12T18:31:00.359882574Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:31:00.455378 env[1683]: time="2024-04-12T18:31:00.455332902Z" level=info msg="Loading containers: start." Apr 12 18:31:00.628083 kernel: Initializing XFRM netlink socket Apr 12 18:31:00.672909 env[1683]: time="2024-04-12T18:31:00.672874850Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:31:00.853880 systemd-networkd[1486]: docker0: Link UP Apr 12 18:31:00.872213 env[1683]: time="2024-04-12T18:31:00.872171043Z" level=info msg="Loading containers: done." Apr 12 18:31:00.881354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck410562060-merged.mount: Deactivated successfully. Apr 12 18:31:00.895844 env[1683]: time="2024-04-12T18:31:00.895774902Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:31:00.896241 env[1683]: time="2024-04-12T18:31:00.896218495Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:31:00.896429 env[1683]: time="2024-04-12T18:31:00.896412914Z" level=info msg="Daemon has completed initialization" Apr 12 18:31:00.925422 systemd[1]: Started docker.service. Apr 12 18:31:00.932324 env[1683]: time="2024-04-12T18:31:00.932260816Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:31:00.948405 systemd[1]: Reloading. Apr 12 18:31:00.989172 /usr/lib/systemd/system-generators/torcx-generator[1816]: time="2024-04-12T18:31:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:31:00.989566 /usr/lib/systemd/system-generators/torcx-generator[1816]: time="2024-04-12T18:31:00Z" level=info msg="torcx already run" Apr 12 18:31:01.079984 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Apr 12 18:31:01.080003 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:31:01.095262 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:31:01.177761 systemd[1]: Started kubelet.service. Apr 12 18:31:01.231858 kubelet[1872]: E0412 18:31:01.231207 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:31:01.233704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:31:01.233822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:31:05.548292 env[1340]: time="2024-04-12T18:31:05.547923032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\"" Apr 12 18:31:06.504993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408904580.mount: Deactivated successfully. Apr 12 18:31:08.933732 env[1340]: time="2024-04-12T18:31:08.933675837Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:08.941214 env[1340]: time="2024-04-12T18:31:08.941171781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:08.945121 env[1340]: time="2024-04-12T18:31:08.945088678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:08.952819 env[1340]: time="2024-04-12T18:31:08.952775485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:08.953612 env[1340]: time="2024-04-12T18:31:08.953583814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\" returns image reference \"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794\"" Apr 12 18:31:08.962687 env[1340]: time="2024-04-12T18:31:08.962641902Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\"" Apr 12 18:31:11.449412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:31:11.449584 systemd[1]: Stopped kubelet.service. Apr 12 18:31:11.450974 systemd[1]: Started kubelet.service. 
Apr 12 18:31:11.510213 kubelet[1896]: E0412 18:31:11.510159 1896 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:31:11.512519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:31:11.512653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:31:11.587366 env[1340]: time="2024-04-12T18:31:11.587311607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:11.594791 env[1340]: time="2024-04-12T18:31:11.594745524Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:11.600523 env[1340]: time="2024-04-12T18:31:11.600488857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:11.605335 env[1340]: time="2024-04-12T18:31:11.605295387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:11.606046 env[1340]: time="2024-04-12T18:31:11.606018928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\" returns image reference \"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195\"" Apr 12 18:31:11.615216 env[1340]: time="2024-04-12T18:31:11.615183504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\"" Apr 12 18:31:13.203019 env[1340]: time="2024-04-12T18:31:13.202963949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:13.210896 env[1340]: time="2024-04-12T18:31:13.210840420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:13.217599 env[1340]: time="2024-04-12T18:31:13.217563860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:13.222546 env[1340]: time="2024-04-12T18:31:13.222501838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:13.222851 env[1340]: time="2024-04-12T18:31:13.222819974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\" returns image reference \"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb\"" Apr 12 18:31:13.232569 env[1340]: time="2024-04-12T18:31:13.232527663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\"" Apr 12 
18:31:14.333831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169315333.mount: Deactivated successfully. Apr 12 18:31:15.196419 env[1340]: time="2024-04-12T18:31:15.196351258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.205702 env[1340]: time="2024-04-12T18:31:15.205666292Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.212150 env[1340]: time="2024-04-12T18:31:15.212118257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.217177 env[1340]: time="2024-04-12T18:31:15.217132927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:15.217758 env[1340]: time="2024-04-12T18:31:15.217727163Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\" returns image reference \"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775\"" Apr 12 18:31:15.226693 env[1340]: time="2024-04-12T18:31:15.226655666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 12 18:31:16.087079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66506273.mount: Deactivated successfully. Apr 12 18:31:19.076375 env[1340]: time="2024-04-12T18:31:19.076326265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.084455 env[1340]: time="2024-04-12T18:31:19.084414803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.088192 env[1340]: time="2024-04-12T18:31:19.088137713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.092740 env[1340]: time="2024-04-12T18:31:19.092705807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.093706 env[1340]: time="2024-04-12T18:31:19.093678382Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 12 18:31:19.102918 env[1340]: time="2024-04-12T18:31:19.102880965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:31:19.801311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478275488.mount: Deactivated successfully. 
Apr 12 18:31:19.823867 env[1340]: time="2024-04-12T18:31:19.823823036Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.834245 env[1340]: time="2024-04-12T18:31:19.834210300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.838563 env[1340]: time="2024-04-12T18:31:19.838522091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.844881 env[1340]: time="2024-04-12T18:31:19.844846467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:19.845308 env[1340]: time="2024-04-12T18:31:19.845278518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 12 18:31:19.853970 env[1340]: time="2024-04-12T18:31:19.853927418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Apr 12 18:31:20.464968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410279086.mount: Deactivated successfully. Apr 12 18:31:21.699431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 12 18:31:21.699611 systemd[1]: Stopped kubelet.service. Apr 12 18:31:21.701057 systemd[1]: Started kubelet.service. Apr 12 18:31:21.757814 kubelet[1927]: E0412 18:31:21.757772 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:31:21.760096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:31:21.760221 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
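The kubelet is crash-looping on the missing /var/lib/kubelet/config.yaml; that file is normally generated by kubeadm during init/join, so the repeated failures are expected until the node is joined to a cluster (as the successful start with kubeadm-style flags later shows). A minimal hand-written stand-in, for illustration only; the settings are assumptions about a sensible baseline, not what kubeadm emits, though cgroupDriver: systemd matches the CgroupDriver this host's kubelet reports:

    from pathlib import Path

    # Minimal KubeletConfiguration (sketch)
    CONFIG = "\n".join([
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
    ]) + "\n"

    Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    Path("/var/lib/kubelet/config.yaml").write_text(CONFIG)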
Apr 12 18:31:24.174767 env[1340]: time="2024-04-12T18:31:24.174720930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:24.181425 env[1340]: time="2024-04-12T18:31:24.181385891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:24.187428 env[1340]: time="2024-04-12T18:31:24.187383053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:24.192401 env[1340]: time="2024-04-12T18:31:24.192358875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:24.193243 env[1340]: time="2024-04-12T18:31:24.193215664Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Apr 12 18:31:29.459121 systemd[1]: Stopped kubelet.service. Apr 12 18:31:29.474776 systemd[1]: Reloading. Apr 12 18:31:29.549277 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2024-04-12T18:31:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:31:29.549647 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2024-04-12T18:31:29Z" level=info msg="torcx already run" Apr 12 18:31:29.607737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:31:29.607756 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:31:29.622995 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:31:29.712386 systemd[1]: Started kubelet.service. Apr 12 18:31:29.756284 kubelet[2078]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:31:29.756284 kubelet[2078]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:31:29.756284 kubelet[2078]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 18:31:29.756651 kubelet[2078]: I0412 18:31:29.756330 2078 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:31:30.505686 kubelet[2078]: I0412 18:31:30.505645 2078 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:31:30.505686 kubelet[2078]: I0412 18:31:30.505678 2078 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:31:30.505895 kubelet[2078]: I0412 18:31:30.505874 2078 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:31:30.511190 kubelet[2078]: E0412 18:31:30.511164 2078 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.511241 kubelet[2078]: I0412 18:31:30.511226 2078 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:31:30.516944 kubelet[2078]: I0412 18:31:30.516907 2078 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:31:30.517506 kubelet[2078]: I0412 18:31:30.517484 2078 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:31:30.517907 kubelet[2078]: I0412 18:31:30.517889 2078 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:31:30.518041 kubelet[2078]: I0412 18:31:30.518029 2078 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:31:30.518140 kubelet[2078]: I0412 18:31:30.518121 2078 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:31:30.518317 kubelet[2078]: I0412 18:31:30.518304 2078 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:31:30.518489 kubelet[2078]: I0412 18:31:30.518478 2078 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:31:30.518569 
kubelet[2078]: I0412 18:31:30.518558 2078 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:31:30.518644 kubelet[2078]: I0412 18:31:30.518635 2078 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:31:30.518710 kubelet[2078]: I0412 18:31:30.518700 2078 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:31:30.521361 kubelet[2078]: I0412 18:31:30.521337 2078 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:31:30.521733 kubelet[2078]: I0412 18:31:30.521718 2078 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:31:30.521841 kubelet[2078]: W0412 18:31:30.521832 2078 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:31:30.522392 kubelet[2078]: I0412 18:31:30.522375 2078 server.go:1256] "Started kubelet" Apr 12 18:31:30.522607 kubelet[2078]: W0412 18:31:30.522570 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-e21a461a74&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.522693 kubelet[2078]: E0412 18:31:30.522682 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-e21a461a74&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.530640 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:31:30.530765 kubelet[2078]: E0412 18:31:30.526189 2078 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.3-a-e21a461a74.17c59bf07fc406ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.3-a-e21a461a74,UID:ci-3510.3.3-a-e21a461a74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:31:30.522351341 +0000 UTC m=+0.804707926,LastTimestamp:2024-04-12 18:31:30.522351341 +0000 UTC m=+0.804707926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:31:30.530765 kubelet[2078]: W0412 18:31:30.526293 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.530765 kubelet[2078]: E0412 18:31:30.526329 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.530765 kubelet[2078]: E0412 18:31:30.527532 2078 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:31:30.530765 kubelet[2078]: I0412 18:31:30.527658 2078 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:31:30.530765 kubelet[2078]: I0412 18:31:30.527901 2078 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:31:30.530765 kubelet[2078]: I0412 18:31:30.527951 2078 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:31:30.530981 kubelet[2078]: I0412 18:31:30.528656 2078 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:31:30.531210 kubelet[2078]: I0412 18:31:30.531187 2078 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:31:30.531678 kubelet[2078]: I0412 18:31:30.531650 2078 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:31:30.532899 kubelet[2078]: I0412 18:31:30.532861 2078 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:31:30.532958 kubelet[2078]: I0412 18:31:30.532950 2078 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:31:30.533637 kubelet[2078]: E0412 18:31:30.533609 2078 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-e21a461a74\" not found" Apr 12 18:31:30.534772 kubelet[2078]: E0412 18:31:30.534740 2078 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Apr 12 18:31:30.534850 kubelet[2078]: W0412 18:31:30.534812 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.534850 kubelet[2078]: E0412 18:31:30.534847 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.536134 kubelet[2078]: I0412 18:31:30.536107 2078 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:31:30.536134 kubelet[2078]: I0412 18:31:30.536128 2078 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:31:30.536225 kubelet[2078]: I0412 18:31:30.536205 2078 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:31:30.580625 kubelet[2078]: I0412 18:31:30.580595 2078 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:31:30.581750 kubelet[2078]: I0412 18:31:30.581729 2078 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 18:31:30.581876 kubelet[2078]: I0412 18:31:30.581866 2078 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:31:30.581953 kubelet[2078]: I0412 18:31:30.581943 2078 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:31:30.582051 kubelet[2078]: E0412 18:31:30.582042 2078 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:31:30.583266 kubelet[2078]: W0412 18:31:30.583230 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.583409 kubelet[2078]: E0412 18:31:30.583397 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:30.655816 kubelet[2078]: I0412 18:31:30.655790 2078 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.656677 kubelet[2078]: E0412 18:31:30.656662 2078 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.657078 kubelet[2078]: I0412 18:31:30.657047 2078 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:31:30.657144 kubelet[2078]: I0412 18:31:30.657083 2078 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:31:30.657144 kubelet[2078]: I0412 18:31:30.657102 2078 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:31:30.661967 kubelet[2078]: I0412 18:31:30.661939 2078 policy_none.go:49] "None policy: Start" Apr 12 18:31:30.662755 kubelet[2078]: I0412 18:31:30.662729 2078 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:31:30.662830 kubelet[2078]: I0412 18:31:30.662761 2078 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:31:30.670373 systemd[1]: Created slice kubepods.slice. Apr 12 18:31:30.674984 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:31:30.677992 systemd[1]: Created slice kubepods-besteffort.slice. 
Apr 12 18:31:30.683046 kubelet[2078]: E0412 18:31:30.683011 2078 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:31:30.687987 kubelet[2078]: I0412 18:31:30.687958 2078 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:31:30.688244 kubelet[2078]: I0412 18:31:30.688223 2078 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:31:30.690295 kubelet[2078]: E0412 18:31:30.690269 2078 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.3-a-e21a461a74\" not found" Apr 12 18:31:30.736169 kubelet[2078]: E0412 18:31:30.736129 2078 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Apr 12 18:31:30.860098 kubelet[2078]: I0412 18:31:30.859109 2078 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.860098 kubelet[2078]: E0412 18:31:30.859424 2078 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.883850 kubelet[2078]: I0412 18:31:30.883792 2078 topology_manager.go:215] "Topology Admit Handler" podUID="3e4868b618f0db5237f1e85b851ead58" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.885345 kubelet[2078]: I0412 18:31:30.885319 2078 topology_manager.go:215] "Topology Admit Handler" podUID="ead94c2c69c36521e0c1f95c4806670d" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.886982 kubelet[2078]: I0412 18:31:30.886957 2078 topology_manager.go:215] "Topology Admit Handler" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.891767 systemd[1]: Created slice kubepods-burstable-pod3e4868b618f0db5237f1e85b851ead58.slice. Apr 12 18:31:30.905933 systemd[1]: Created slice kubepods-burstable-podead94c2c69c36521e0c1f95c4806670d.slice. Apr 12 18:31:30.914122 systemd[1]: Created slice kubepods-burstable-podd4db9e5adda04a9e3cb4cc0abd85127c.slice. 
Apr 12 18:31:30.934919 kubelet[2078]: I0412 18:31:30.934890 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e4868b618f0db5237f1e85b851ead58-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" (UID: \"3e4868b618f0db5237f1e85b851ead58\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935126 kubelet[2078]: I0412 18:31:30.935111 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4db9e5adda04a9e3cb4cc0abd85127c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-e21a461a74\" (UID: \"d4db9e5adda04a9e3cb4cc0abd85127c\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935236 kubelet[2078]: I0412 18:31:30.935225 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e4868b618f0db5237f1e85b851ead58-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" (UID: \"3e4868b618f0db5237f1e85b851ead58\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935346 kubelet[2078]: I0412 18:31:30.935335 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e4868b618f0db5237f1e85b851ead58-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" (UID: \"3e4868b618f0db5237f1e85b851ead58\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935444 kubelet[2078]: I0412 18:31:30.935434 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935539 kubelet[2078]: I0412 18:31:30.935528 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935638 kubelet[2078]: I0412 18:31:30.935628 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935744 kubelet[2078]: I0412 18:31:30.935733 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:30.935885 kubelet[2078]: I0412 18:31:30.935848 2078 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:31.137150 kubelet[2078]: E0412 18:31:31.137043 2078 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Apr 12 18:31:31.204201 env[1340]: time="2024-04-12T18:31:31.203876069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,Uid:3e4868b618f0db5237f1e85b851ead58,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:31.210003 env[1340]: time="2024-04-12T18:31:31.209961076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-e21a461a74,Uid:ead94c2c69c36521e0c1f95c4806670d,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:31.217419 env[1340]: time="2024-04-12T18:31:31.217150226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-e21a461a74,Uid:d4db9e5adda04a9e3cb4cc0abd85127c,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:31.261428 kubelet[2078]: I0412 18:31:31.261400 2078 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:31.261899 kubelet[2078]: E0412 18:31:31.261877 2078 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:31.305470 kubelet[2078]: E0412 18:31:31.305444 2078 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.3-a-e21a461a74.17c59bf07fc406ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.3-a-e21a461a74,UID:ci-3510.3.3-a-e21a461a74,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:31:30.522351341 +0000 UTC m=+0.804707926,LastTimestamp:2024-04-12 18:31:30.522351341 +0000 UTC m=+0.804707926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:31:31.519749 kubelet[2078]: W0412 18:31:31.519670 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:31.519749 kubelet[2078]: E0412 18:31:31.519726 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:31.688726 kubelet[2078]: W0412 18:31:31.688668 2078 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-e21a461a74&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:31.688726 kubelet[2078]: E0412 18:31:31.688727 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.3-a-e21a461a74&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:31.877203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118912605.mount: Deactivated successfully. Apr 12 18:31:31.887998 kubelet[2078]: W0412 18:31:31.887936 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:31.887998 kubelet[2078]: E0412 18:31:31.887996 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:31.906759 env[1340]: time="2024-04-12T18:31:31.906705147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.922647 env[1340]: time="2024-04-12T18:31:31.922593489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.930293 env[1340]: time="2024-04-12T18:31:31.930250415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.935226 env[1340]: time="2024-04-12T18:31:31.935191480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.937683 kubelet[2078]: E0412 18:31:31.937659 2078 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Apr 12 18:31:31.940819 env[1340]: time="2024-04-12T18:31:31.940787912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.944861 env[1340]: time="2024-04-12T18:31:31.944819904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.949681 env[1340]: time="2024-04-12T18:31:31.949646535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.957551 env[1340]: time="2024-04-12T18:31:31.957519210Z" 
level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.960966 env[1340]: time="2024-04-12T18:31:31.960925795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.966209 env[1340]: time="2024-04-12T18:31:31.966179364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.972100 env[1340]: time="2024-04-12T18:31:31.972047542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:31.986324 env[1340]: time="2024-04-12T18:31:31.986286968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:31:32.036661 kubelet[2078]: W0412 18:31:32.036574 2078 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:32.036797 kubelet[2078]: E0412 18:31:32.036675 2078 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Apr 12 18:31:32.064466 kubelet[2078]: I0412 18:31:32.064132 2078 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:32.064466 kubelet[2078]: E0412 18:31:32.064428 2078 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:32.097882 env[1340]: time="2024-04-12T18:31:32.097705610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:32.097882 env[1340]: time="2024-04-12T18:31:32.097752088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:32.097882 env[1340]: time="2024-04-12T18:31:32.097764127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:32.098177 env[1340]: time="2024-04-12T18:31:32.098123549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5 pid=2118 runtime=io.containerd.runc.v2 Apr 12 18:31:32.108985 env[1340]: time="2024-04-12T18:31:32.108921164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:32.111519 systemd[1]: Started cri-containerd-9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5.scope. Apr 12 18:31:32.116989 env[1340]: time="2024-04-12T18:31:32.112362631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:32.116989 env[1340]: time="2024-04-12T18:31:32.112386389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:32.116989 env[1340]: time="2024-04-12T18:31:32.112575060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53 pid=2139 runtime=io.containerd.runc.v2 Apr 12 18:31:32.125810 env[1340]: time="2024-04-12T18:31:32.125732036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:32.125994 env[1340]: time="2024-04-12T18:31:32.125970904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:32.126105 env[1340]: time="2024-04-12T18:31:32.126083298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:32.126757 env[1340]: time="2024-04-12T18:31:32.126708227Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60 pid=2162 runtime=io.containerd.runc.v2 Apr 12 18:31:32.140832 systemd[1]: Started cri-containerd-dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53.scope. Apr 12 18:31:32.171696 systemd[1]: Started cri-containerd-571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60.scope. 
Apr 12 18:31:32.174123 env[1340]: time="2024-04-12T18:31:32.174084436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,Uid:3e4868b618f0db5237f1e85b851ead58,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\"" Apr 12 18:31:32.183191 env[1340]: time="2024-04-12T18:31:32.183149579Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:31:32.193473 env[1340]: time="2024-04-12T18:31:32.193410901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.3-a-e21a461a74,Uid:ead94c2c69c36521e0c1f95c4806670d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53\"" Apr 12 18:31:32.201268 env[1340]: time="2024-04-12T18:31:32.201222427Z" level=info msg="CreateContainer within sandbox \"dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:31:32.219055 env[1340]: time="2024-04-12T18:31:32.218999049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.3-a-e21a461a74,Uid:d4db9e5adda04a9e3cb4cc0abd85127c,Namespace:kube-system,Attempt:0,} returns sandbox id \"571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60\"" Apr 12 18:31:32.222822 env[1340]: time="2024-04-12T18:31:32.222763220Z" level=info msg="CreateContainer within sandbox \"571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:31:32.274554 env[1340]: time="2024-04-12T18:31:32.274498489Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\"" Apr 12 18:31:32.275350 env[1340]: time="2024-04-12T18:31:32.275323407Z" level=info msg="StartContainer for \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\"" Apr 12 18:31:32.290899 systemd[1]: Started cri-containerd-22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606.scope. 
Apr 12 18:31:32.304226 env[1340]: time="2024-04-12T18:31:32.304174991Z" level=info msg="CreateContainer within sandbox \"dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12\"" Apr 12 18:31:32.304640 env[1340]: time="2024-04-12T18:31:32.304610409Z" level=info msg="StartContainer for \"5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12\"" Apr 12 18:31:32.309208 env[1340]: time="2024-04-12T18:31:32.309155340Z" level=info msg="CreateContainer within sandbox \"571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc\"" Apr 12 18:31:32.309655 env[1340]: time="2024-04-12T18:31:32.309627636Z" level=info msg="StartContainer for \"f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc\"" Apr 12 18:31:32.334598 systemd[1]: Started cri-containerd-5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12.scope. Apr 12 18:31:32.358605 systemd[1]: Started cri-containerd-f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc.scope. Apr 12 18:31:32.361492 env[1340]: time="2024-04-12T18:31:32.361434022Z" level=info msg="StartContainer for \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\" returns successfully" Apr 12 18:31:32.382516 env[1340]: time="2024-04-12T18:31:32.382472320Z" level=info msg="StartContainer for \"5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12\" returns successfully" Apr 12 18:31:32.423163 env[1340]: time="2024-04-12T18:31:32.423025874Z" level=info msg="StartContainer for \"f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc\" returns successfully" Apr 12 18:31:33.666529 kubelet[2078]: I0412 18:31:33.666503 2078 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:34.430833 kubelet[2078]: I0412 18:31:34.430804 2078 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:34.432814 kubelet[2078]: E0412 18:31:34.432789 2078 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.3-a-e21a461a74\" not found" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:34.464994 kubelet[2078]: E0412 18:31:34.464950 2078 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.3-a-e21a461a74\" not found" Apr 12 18:31:35.524051 kubelet[2078]: I0412 18:31:35.524018 2078 apiserver.go:52] "Watching apiserver" Apr 12 18:31:35.533669 kubelet[2078]: I0412 18:31:35.533637 2078 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:31:35.550125 kubelet[2078]: W0412 18:31:35.550098 2078 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:31:37.324517 systemd[1]: Reloading. 
Apr 12 18:31:37.404754 /usr/lib/systemd/system-generators/torcx-generator[2370]: time="2024-04-12T18:31:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:31:37.405163 /usr/lib/systemd/system-generators/torcx-generator[2370]: time="2024-04-12T18:31:37Z" level=info msg="torcx already run" Apr 12 18:31:37.504723 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:31:37.504744 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:31:37.520740 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:31:37.647753 systemd[1]: Stopping kubelet.service... Apr 12 18:31:37.659864 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:31:37.660267 systemd[1]: Stopped kubelet.service. Apr 12 18:31:37.660417 systemd[1]: kubelet.service: Consumed 1.044s CPU time. Apr 12 18:31:37.663172 systemd[1]: Started kubelet.service. Apr 12 18:31:37.741385 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:31:37.742217 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:31:37.742283 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:31:37.742422 kubelet[2429]: I0412 18:31:37.742383 2429 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:31:37.746845 kubelet[2429]: I0412 18:31:37.746818 2429 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:31:37.746845 kubelet[2429]: I0412 18:31:37.746843 2429 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:31:37.747023 kubelet[2429]: I0412 18:31:37.747003 2429 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:31:37.748537 kubelet[2429]: I0412 18:31:37.748510 2429 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:31:37.750372 kubelet[2429]: I0412 18:31:37.750342 2429 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:31:37.756288 kubelet[2429]: I0412 18:31:37.756268 2429 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:31:37.756473 kubelet[2429]: I0412 18:31:37.756460 2429 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:31:37.756637 kubelet[2429]: I0412 18:31:37.756618 2429 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:31:37.756637 kubelet[2429]: I0412 18:31:37.756637 2429 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:31:37.756740 kubelet[2429]: I0412 18:31:37.756645 2429 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:31:37.756740 kubelet[2429]: I0412 18:31:37.756672 2429 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:31:37.756804 kubelet[2429]: I0412 18:31:37.756766 2429 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:31:37.756804 kubelet[2429]: I0412 18:31:37.756778 2429 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:31:37.756847 kubelet[2429]: I0412 18:31:37.756811 2429 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:31:37.756847 kubelet[2429]: I0412 18:31:37.756825 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:31:37.769100 kubelet[2429]: I0412 18:31:37.765128 2429 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:31:37.769100 kubelet[2429]: I0412 18:31:37.765310 2429 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:31:37.769100 kubelet[2429]: I0412 18:31:37.765689 2429 server.go:1256] "Started kubelet" Apr 12 18:31:37.769100 kubelet[2429]: I0412 18:31:37.768148 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:31:37.769100 kubelet[2429]: I0412 18:31:37.768509 2429 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:31:37.769387 kubelet[2429]: I0412 18:31:37.769350 2429 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:31:37.770492 kubelet[2429]: I0412 18:31:37.770459 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Apr 12 18:31:37.770634 kubelet[2429]: I0412 18:31:37.770615 2429 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:31:37.772057 kubelet[2429]: I0412 18:31:37.772038 2429 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:31:37.772436 kubelet[2429]: I0412 18:31:37.772398 2429 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:31:37.772541 kubelet[2429]: I0412 18:31:37.772525 2429 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:31:37.776091 kubelet[2429]: I0412 18:31:37.775265 2429 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:31:37.776091 kubelet[2429]: I0412 18:31:37.775363 2429 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:31:37.803131 sudo[2447]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:31:37.803329 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:31:37.814347 kubelet[2429]: I0412 18:31:37.814321 2429 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:31:37.817144 kubelet[2429]: I0412 18:31:37.817124 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:31:37.820700 kubelet[2429]: I0412 18:31:37.820681 2429 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 18:31:37.820816 kubelet[2429]: I0412 18:31:37.820806 2429 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:31:37.820885 kubelet[2429]: I0412 18:31:37.820876 2429 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:31:37.820976 kubelet[2429]: E0412 18:31:37.820966 2429 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:31:37.875581 kubelet[2429]: I0412 18:31:37.875560 2429 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.891137 kubelet[2429]: I0412 18:31:37.891112 2429 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:31:37.891420 kubelet[2429]: I0412 18:31:37.891406 2429 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:31:37.891492 kubelet[2429]: I0412 18:31:37.891484 2429 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:31:37.891695 kubelet[2429]: I0412 18:31:37.891684 2429 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:31:37.891771 kubelet[2429]: I0412 18:31:37.891761 2429 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:31:37.892271 kubelet[2429]: I0412 18:31:37.892254 2429 policy_none.go:49] "None policy: Start" Apr 12 18:31:37.892427 kubelet[2429]: I0412 18:31:37.891184 2429 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.892549 kubelet[2429]: I0412 18:31:37.892538 2429 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.893286 kubelet[2429]: I0412 18:31:37.893272 2429 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:31:37.893391 kubelet[2429]: I0412 18:31:37.893382 2429 state_mem.go:35] "Initializing new in-memory state 
store" Apr 12 18:31:37.896017 kubelet[2429]: I0412 18:31:37.895998 2429 state_mem.go:75] "Updated machine memory state" Apr 12 18:31:37.904006 kubelet[2429]: I0412 18:31:37.902223 2429 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:31:37.905424 kubelet[2429]: I0412 18:31:37.905400 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:31:37.921382 kubelet[2429]: I0412 18:31:37.921355 2429 topology_manager.go:215] "Topology Admit Handler" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.921828 kubelet[2429]: I0412 18:31:37.921811 2429 topology_manager.go:215] "Topology Admit Handler" podUID="3e4868b618f0db5237f1e85b851ead58" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.922389 kubelet[2429]: I0412 18:31:37.922359 2429 topology_manager.go:215] "Topology Admit Handler" podUID="ead94c2c69c36521e0c1f95c4806670d" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.933114 kubelet[2429]: W0412 18:31:37.933037 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:31:37.933869 kubelet[2429]: E0412 18:31:37.933851 2429 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.934173 kubelet[2429]: W0412 18:31:37.934161 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:31:37.937161 kubelet[2429]: W0412 18:31:37.937054 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:31:37.973617 kubelet[2429]: I0412 18:31:37.973589 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4db9e5adda04a9e3cb4cc0abd85127c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.3-a-e21a461a74\" (UID: \"d4db9e5adda04a9e3cb4cc0abd85127c\") " pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.973833 kubelet[2429]: I0412 18:31:37.973820 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e4868b618f0db5237f1e85b851ead58-ca-certs\") pod \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" (UID: \"3e4868b618f0db5237f1e85b851ead58\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.973952 kubelet[2429]: I0412 18:31:37.973942 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e4868b618f0db5237f1e85b851ead58-k8s-certs\") pod \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" (UID: \"3e4868b618f0db5237f1e85b851ead58\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.974057 kubelet[2429]: I0412 18:31:37.974048 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.974194 kubelet[2429]: I0412 18:31:37.974185 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.974299 kubelet[2429]: I0412 18:31:37.974290 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.974413 kubelet[2429]: I0412 18:31:37.974403 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e4868b618f0db5237f1e85b851ead58-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" (UID: \"3e4868b618f0db5237f1e85b851ead58\") " pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.974550 kubelet[2429]: I0412 18:31:37.974503 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:37.974597 kubelet[2429]: I0412 18:31:37.974554 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ead94c2c69c36521e0c1f95c4806670d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.3-a-e21a461a74\" (UID: \"ead94c2c69c36521e0c1f95c4806670d\") " pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:38.355395 sudo[2447]: pam_unix(sudo:session): session closed for user root Apr 12 18:31:38.757470 kubelet[2429]: I0412 18:31:38.757449 2429 apiserver.go:52] "Watching apiserver" Apr 12 18:31:38.772937 kubelet[2429]: I0412 18:31:38.772899 2429 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:31:38.876894 kubelet[2429]: W0412 18:31:38.876850 2429 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 12 18:31:38.877044 kubelet[2429]: E0412 18:31:38.876925 2429 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.3-a-e21a461a74\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" Apr 12 18:31:38.892246 kubelet[2429]: I0412 18:31:38.892212 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" podStartSLOduration=3.892164984 podStartE2EDuration="3.892164984s" 
podCreationTimestamp="2024-04-12 18:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:38.890291188 +0000 UTC m=+1.216722244" watchObservedRunningTime="2024-04-12 18:31:38.892164984 +0000 UTC m=+1.218596040" Apr 12 18:31:38.911146 kubelet[2429]: I0412 18:31:38.911106 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" podStartSLOduration=1.911068816 podStartE2EDuration="1.911068816s" podCreationTimestamp="2024-04-12 18:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:38.908380057 +0000 UTC m=+1.234811113" watchObservedRunningTime="2024-04-12 18:31:38.911068816 +0000 UTC m=+1.237499872" Apr 12 18:31:38.911332 kubelet[2429]: I0412 18:31:38.911233 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" podStartSLOduration=1.91121761 podStartE2EDuration="1.91121761s" podCreationTimestamp="2024-04-12 18:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:38.901748394 +0000 UTC m=+1.228179450" watchObservedRunningTime="2024-04-12 18:31:38.91121761 +0000 UTC m=+1.237648666" Apr 12 18:31:39.967321 sudo[1668]: pam_unix(sudo:session): session closed for user root Apr 12 18:31:40.043693 sshd[1665]: pam_unix(sshd:session): session closed for user core Apr 12 18:31:40.046577 systemd-logind[1326]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:31:40.046975 systemd[1]: sshd@4-10.200.20.12:22-10.200.12.6:33144.service: Deactivated successfully. Apr 12 18:31:40.047684 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 18:31:40.047842 systemd[1]: session-7.scope: Consumed 7.028s CPU time. Apr 12 18:31:40.048655 systemd-logind[1326]: Removed session 7. Apr 12 18:31:50.130937 kubelet[2429]: I0412 18:31:50.130892 2429 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:31:50.131673 env[1340]: time="2024-04-12T18:31:50.131637712Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:31:50.132199 kubelet[2429]: I0412 18:31:50.132170 2429 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:31:50.231881 kubelet[2429]: I0412 18:31:50.231840 2429 topology_manager.go:215] "Topology Admit Handler" podUID="51da4e98-db2d-45e1-8323-3d39bea7d282" podNamespace="kube-system" podName="kube-proxy-4rcjm" Apr 12 18:31:50.237322 systemd[1]: Created slice kubepods-besteffort-pod51da4e98_db2d_45e1_8323_3d39bea7d282.slice. Apr 12 18:31:50.238517 kubelet[2429]: I0412 18:31:50.238483 2429 topology_manager.go:215] "Topology Admit Handler" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" podNamespace="kube-system" podName="cilium-6p4rb" Apr 12 18:31:50.250640 systemd[1]: Created slice kubepods-burstable-podc2722697_b602_41d9_8a60_d0b138d8039e.slice. 
Apr 12 18:31:50.257983 kubelet[2429]: W0412 18:31:50.257934 2429 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.257983 kubelet[2429]: E0412 18:31:50.257982 2429 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.258157 kubelet[2429]: W0412 18:31:50.258027 2429 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.258157 kubelet[2429]: E0412 18:31:50.258037 2429 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.259595 kubelet[2429]: W0412 18:31:50.259562 2429 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.259595 kubelet[2429]: E0412 18:31:50.259590 2429 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.259749 kubelet[2429]: W0412 18:31:50.259732 2429 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.259815 kubelet[2429]: E0412 18:31:50.259805 2429 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.260141 kubelet[2429]: W0412 18:31:50.260113 2429 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource 
"configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.260141 kubelet[2429]: E0412 18:31:50.260141 2429 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.3-a-e21a461a74" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.3-a-e21a461a74' and this object Apr 12 18:31:50.335163 kubelet[2429]: I0412 18:31:50.335128 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsf5b\" (UniqueName: \"kubernetes.io/projected/51da4e98-db2d-45e1-8323-3d39bea7d282-kube-api-access-jsf5b\") pod \"kube-proxy-4rcjm\" (UID: \"51da4e98-db2d-45e1-8323-3d39bea7d282\") " pod="kube-system/kube-proxy-4rcjm" Apr 12 18:31:50.335311 kubelet[2429]: I0412 18:31:50.335214 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-hostproc\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335311 kubelet[2429]: I0412 18:31:50.335238 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cni-path\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335311 kubelet[2429]: I0412 18:31:50.335284 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2722697-b602-41d9-8a60-d0b138d8039e-clustermesh-secrets\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335311 kubelet[2429]: I0412 18:31:50.335307 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51da4e98-db2d-45e1-8323-3d39bea7d282-lib-modules\") pod \"kube-proxy-4rcjm\" (UID: \"51da4e98-db2d-45e1-8323-3d39bea7d282\") " pod="kube-system/kube-proxy-4rcjm" Apr 12 18:31:50.335414 kubelet[2429]: I0412 18:31:50.335360 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-bpf-maps\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335414 kubelet[2429]: I0412 18:31:50.335385 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-kernel\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335465 kubelet[2429]: I0412 18:31:50.335431 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-etc-cni-netd\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " 
pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335465 kubelet[2429]: I0412 18:31:50.335452 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjx9m\" (UniqueName: \"kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-kube-api-access-bjx9m\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335510 kubelet[2429]: I0412 18:31:50.335471 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51da4e98-db2d-45e1-8323-3d39bea7d282-xtables-lock\") pod \"kube-proxy-4rcjm\" (UID: \"51da4e98-db2d-45e1-8323-3d39bea7d282\") " pod="kube-system/kube-proxy-4rcjm" Apr 12 18:31:50.335535 kubelet[2429]: I0412 18:31:50.335510 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-xtables-lock\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335535 kubelet[2429]: I0412 18:31:50.335532 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-config-path\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335596 kubelet[2429]: I0412 18:31:50.335578 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-hubble-tls\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335622 kubelet[2429]: I0412 18:31:50.335601 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-cgroup\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335622 kubelet[2429]: I0412 18:31:50.335618 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-lib-modules\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335682 kubelet[2429]: I0412 18:31:50.335665 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-net\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.335713 kubelet[2429]: I0412 18:31:50.335691 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51da4e98-db2d-45e1-8323-3d39bea7d282-kube-proxy\") pod \"kube-proxy-4rcjm\" (UID: \"51da4e98-db2d-45e1-8323-3d39bea7d282\") " pod="kube-system/kube-proxy-4rcjm" Apr 12 18:31:50.335752 kubelet[2429]: I0412 18:31:50.335738 2429 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-run\") pod \"cilium-6p4rb\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") " pod="kube-system/cilium-6p4rb" Apr 12 18:31:50.417695 kubelet[2429]: I0412 18:31:50.417572 2429 topology_manager.go:215] "Topology Admit Handler" podUID="d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5" podNamespace="kube-system" podName="cilium-operator-5cc964979-c6c7v" Apr 12 18:31:50.423835 systemd[1]: Created slice kubepods-besteffort-podd01d15c0_24d1_4d66_8f36_2ff74cf8c3f5.slice. Apr 12 18:31:50.436197 kubelet[2429]: I0412 18:31:50.436145 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqww\" (UniqueName: \"kubernetes.io/projected/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-kube-api-access-2bqww\") pod \"cilium-operator-5cc964979-c6c7v\" (UID: \"d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5\") " pod="kube-system/cilium-operator-5cc964979-c6c7v" Apr 12 18:31:50.436348 kubelet[2429]: I0412 18:31:50.436243 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-cilium-config-path\") pod \"cilium-operator-5cc964979-c6c7v\" (UID: \"d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5\") " pod="kube-system/cilium-operator-5cc964979-c6c7v" Apr 12 18:31:51.437572 kubelet[2429]: E0412 18:31:51.437544 2429 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 12 18:31:51.437941 kubelet[2429]: E0412 18:31:51.437927 2429 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6p4rb: failed to sync secret cache: timed out waiting for the condition Apr 12 18:31:51.438090 kubelet[2429]: E0412 18:31:51.437540 2429 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 12 18:31:51.438139 kubelet[2429]: E0412 18:31:51.437557 2429 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.438139 kubelet[2429]: E0412 18:31:51.437566 2429 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.438214 kubelet[2429]: E0412 18:31:51.438202 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-hubble-tls podName:c2722697-b602-41d9-8a60-d0b138d8039e nodeName:}" failed. No retries permitted until 2024-04-12 18:31:51.938039283 +0000 UTC m=+14.264470339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-hubble-tls") pod "cilium-6p4rb" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:31:51.438308 kubelet[2429]: E0412 18:31:51.438298 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2722697-b602-41d9-8a60-d0b138d8039e-clustermesh-secrets podName:c2722697-b602-41d9-8a60-d0b138d8039e nodeName:}" failed. No retries permitted until 2024-04-12 18:31:51.938284314 +0000 UTC m=+14.264715370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/c2722697-b602-41d9-8a60-d0b138d8039e-clustermesh-secrets") pod "cilium-6p4rb" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:31:51.438389 kubelet[2429]: E0412 18:31:51.438379 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/51da4e98-db2d-45e1-8323-3d39bea7d282-kube-proxy podName:51da4e98-db2d-45e1-8323-3d39bea7d282 nodeName:}" failed. No retries permitted until 2024-04-12 18:31:51.938368991 +0000 UTC m=+14.264800047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/51da4e98-db2d-45e1-8323-3d39bea7d282-kube-proxy") pod "kube-proxy-4rcjm" (UID: "51da4e98-db2d-45e1-8323-3d39bea7d282") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.438540 kubelet[2429]: E0412 18:31:51.438526 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-config-path podName:c2722697-b602-41d9-8a60-d0b138d8039e nodeName:}" failed. No retries permitted until 2024-04-12 18:31:51.938508826 +0000 UTC m=+14.264939842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-config-path") pod "cilium-6p4rb" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.450106 kubelet[2429]: E0412 18:31:51.450076 2429 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.450106 kubelet[2429]: E0412 18:31:51.450107 2429 projected.go:200] Error preparing data for projected volume kube-api-access-bjx9m for pod kube-system/cilium-6p4rb: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.450242 kubelet[2429]: E0412 18:31:51.450154 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-kube-api-access-bjx9m podName:c2722697-b602-41d9-8a60-d0b138d8039e nodeName:}" failed. No retries permitted until 2024-04-12 18:31:51.95013889 +0000 UTC m=+14.276569946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bjx9m" (UniqueName: "kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-kube-api-access-bjx9m") pod "cilium-6p4rb" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.459071 kubelet[2429]: E0412 18:31:51.459034 2429 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.459071 kubelet[2429]: E0412 18:31:51.459075 2429 projected.go:200] Error preparing data for projected volume kube-api-access-jsf5b for pod kube-system/kube-proxy-4rcjm: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.460408 kubelet[2429]: E0412 18:31:51.459121 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51da4e98-db2d-45e1-8323-3d39bea7d282-kube-api-access-jsf5b podName:51da4e98-db2d-45e1-8323-3d39bea7d282 nodeName:}" failed. 
No retries permitted until 2024-04-12 18:31:51.959105729 +0000 UTC m=+14.285536745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jsf5b" (UniqueName: "kubernetes.io/projected/51da4e98-db2d-45e1-8323-3d39bea7d282-kube-api-access-jsf5b") pod "kube-proxy-4rcjm" (UID: "51da4e98-db2d-45e1-8323-3d39bea7d282") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.537466 kubelet[2429]: E0412 18:31:51.537431 2429 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.537699 kubelet[2429]: E0412 18:31:51.537686 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-cilium-config-path podName:d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5 nodeName:}" failed. No retries permitted until 2024-04-12 18:31:52.037664039 +0000 UTC m=+14.364095095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-cilium-config-path") pod "cilium-operator-5cc964979-c6c7v" (UID: "d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.546543 kubelet[2429]: E0412 18:31:51.546505 2429 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.546543 kubelet[2429]: E0412 18:31:51.546543 2429 projected.go:200] Error preparing data for projected volume kube-api-access-2bqww for pod kube-system/cilium-operator-5cc964979-c6c7v: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:51.546743 kubelet[2429]: E0412 18:31:51.546597 2429 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-kube-api-access-2bqww podName:d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5 nodeName:}" failed. No retries permitted until 2024-04-12 18:31:52.04658068 +0000 UTC m=+14.373011696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2bqww" (UniqueName: "kubernetes.io/projected/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-kube-api-access-2bqww") pod "cilium-operator-5cc964979-c6c7v" (UID: "d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:31:52.227614 env[1340]: time="2024-04-12T18:31:52.227212179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-c6c7v,Uid:d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:52.262628 env[1340]: time="2024-04-12T18:31:52.262547735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:52.262628 env[1340]: time="2024-04-12T18:31:52.262592813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:52.262851 env[1340]: time="2024-04-12T18:31:52.262610373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:52.262901 env[1340]: time="2024-04-12T18:31:52.262867164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d pid=2511 runtime=io.containerd.runc.v2 Apr 12 18:31:52.272789 systemd[1]: Started cri-containerd-9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d.scope. Apr 12 18:31:52.309522 env[1340]: time="2024-04-12T18:31:52.309458923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-c6c7v,Uid:d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\"" Apr 12 18:31:52.313512 env[1340]: time="2024-04-12T18:31:52.313472261Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:31:52.344824 env[1340]: time="2024-04-12T18:31:52.344788639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rcjm,Uid:51da4e98-db2d-45e1-8323-3d39bea7d282,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:52.356734 env[1340]: time="2024-04-12T18:31:52.356683340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6p4rb,Uid:c2722697-b602-41d9-8a60-d0b138d8039e,Namespace:kube-system,Attempt:0,}" Apr 12 18:31:52.410751 env[1340]: time="2024-04-12T18:31:52.410439007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:52.410751 env[1340]: time="2024-04-12T18:31:52.410481285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:52.410751 env[1340]: time="2024-04-12T18:31:52.410491645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:52.410751 env[1340]: time="2024-04-12T18:31:52.410616640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54cc1a5612e7712df5c8c93003ee0c83056ba460a5e08b2a183ca5fa6ba05792 pid=2552 runtime=io.containerd.runc.v2 Apr 12 18:31:52.416155 env[1340]: time="2024-04-12T18:31:52.416053929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:31:52.416282 env[1340]: time="2024-04-12T18:31:52.416163845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:31:52.416282 env[1340]: time="2024-04-12T18:31:52.416189124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:31:52.416353 env[1340]: time="2024-04-12T18:31:52.416319959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0 pid=2569 runtime=io.containerd.runc.v2 Apr 12 18:31:52.421952 systemd[1]: Started cri-containerd-54cc1a5612e7712df5c8c93003ee0c83056ba460a5e08b2a183ca5fa6ba05792.scope. Apr 12 18:31:52.461752 systemd[1]: Started cri-containerd-87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0.scope. 
Apr 12 18:31:52.470580 env[1340]: time="2024-04-12T18:31:52.470540730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rcjm,Uid:51da4e98-db2d-45e1-8323-3d39bea7d282,Namespace:kube-system,Attempt:0,} returns sandbox id \"54cc1a5612e7712df5c8c93003ee0c83056ba460a5e08b2a183ca5fa6ba05792\"" Apr 12 18:31:52.473412 env[1340]: time="2024-04-12T18:31:52.473380230Z" level=info msg="CreateContainer within sandbox \"54cc1a5612e7712df5c8c93003ee0c83056ba460a5e08b2a183ca5fa6ba05792\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:31:52.498763 env[1340]: time="2024-04-12T18:31:52.498652260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6p4rb,Uid:c2722697-b602-41d9-8a60-d0b138d8039e,Namespace:kube-system,Attempt:0,} returns sandbox id \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\"" Apr 12 18:31:52.526144 env[1340]: time="2024-04-12T18:31:52.526091334Z" level=info msg="CreateContainer within sandbox \"54cc1a5612e7712df5c8c93003ee0c83056ba460a5e08b2a183ca5fa6ba05792\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec9a2aea4a27420cb469446ec98508b2aadad4169b136c9d5b15e22bb647fabf\"" Apr 12 18:31:52.526934 env[1340]: time="2024-04-12T18:31:52.526905465Z" level=info msg="StartContainer for \"ec9a2aea4a27420cb469446ec98508b2aadad4169b136c9d5b15e22bb647fabf\"" Apr 12 18:31:52.542547 systemd[1]: Started cri-containerd-ec9a2aea4a27420cb469446ec98508b2aadad4169b136c9d5b15e22bb647fabf.scope. Apr 12 18:31:52.577440 env[1340]: time="2024-04-12T18:31:52.577367208Z" level=info msg="StartContainer for \"ec9a2aea4a27420cb469446ec98508b2aadad4169b136c9d5b15e22bb647fabf\" returns successfully" Apr 12 18:32:02.480999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616339479.mount: Deactivated successfully. 
Apr 12 18:32:03.230790 env[1340]: time="2024-04-12T18:32:03.230737579Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:32:03.239153 env[1340]: time="2024-04-12T18:32:03.239115326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:32:03.244522 env[1340]: time="2024-04-12T18:32:03.244484405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:32:03.245153 env[1340]: time="2024-04-12T18:32:03.245120625Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 12 18:32:03.249302 env[1340]: time="2024-04-12T18:32:03.248542122Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:32:03.250452 env[1340]: time="2024-04-12T18:32:03.250421186Z" level=info msg="CreateContainer within sandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:32:03.282479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543990034.mount: Deactivated successfully. Apr 12 18:32:03.297697 env[1340]: time="2024-04-12T18:32:03.297646562Z" level=info msg="CreateContainer within sandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\"" Apr 12 18:32:03.299291 env[1340]: time="2024-04-12T18:32:03.299259673Z" level=info msg="StartContainer for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\"" Apr 12 18:32:03.313933 systemd[1]: Started cri-containerd-373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd.scope. 
Apr 12 18:32:03.344532 env[1340]: time="2024-04-12T18:32:03.344467110Z" level=info msg="StartContainer for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" returns successfully" Apr 12 18:32:03.981670 kubelet[2429]: I0412 18:32:03.981637 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-c6c7v" podStartSLOduration=3.046478775 podStartE2EDuration="13.981563502s" podCreationTimestamp="2024-04-12 18:31:50 +0000 UTC" firstStartedPulling="2024-04-12 18:31:52.31096983 +0000 UTC m=+14.637400846" lastFinishedPulling="2024-04-12 18:32:03.246054437 +0000 UTC m=+25.572485573" observedRunningTime="2024-04-12 18:32:03.972489935 +0000 UTC m=+26.298920991" watchObservedRunningTime="2024-04-12 18:32:03.981563502 +0000 UTC m=+26.307994558" Apr 12 18:32:03.982329 kubelet[2429]: I0412 18:32:03.982299 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4rcjm" podStartSLOduration=13.982272161000001 podStartE2EDuration="13.982272161s" podCreationTimestamp="2024-04-12 18:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:31:52.904342173 +0000 UTC m=+15.230773189" watchObservedRunningTime="2024-04-12 18:32:03.982272161 +0000 UTC m=+26.308703217" Apr 12 18:32:08.133247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133758023.mount: Deactivated successfully. Apr 12 18:32:10.552282 env[1340]: time="2024-04-12T18:32:10.552230884Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:32:10.564189 env[1340]: time="2024-04-12T18:32:10.564146473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:32:10.569093 env[1340]: time="2024-04-12T18:32:10.569043577Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:32:10.569757 env[1340]: time="2024-04-12T18:32:10.569724198Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 12 18:32:10.573650 env[1340]: time="2024-04-12T18:32:10.573619250Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:32:10.610566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804002914.mount: Deactivated successfully. Apr 12 18:32:10.614991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641151823.mount: Deactivated successfully. 
Apr 12 18:32:10.631516 env[1340]: time="2024-04-12T18:32:10.631461206Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\"" Apr 12 18:32:10.634206 env[1340]: time="2024-04-12T18:32:10.634170731Z" level=info msg="StartContainer for \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\"" Apr 12 18:32:10.653923 systemd[1]: Started cri-containerd-0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7.scope. Apr 12 18:32:10.691040 env[1340]: time="2024-04-12T18:32:10.690981955Z" level=info msg="StartContainer for \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\" returns successfully" Apr 12 18:32:10.695392 systemd[1]: cri-containerd-0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7.scope: Deactivated successfully. Apr 12 18:32:11.608802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7-rootfs.mount: Deactivated successfully. Apr 12 18:32:12.341005 env[1340]: time="2024-04-12T18:32:12.339899010Z" level=info msg="shim disconnected" id=0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7 Apr 12 18:32:12.341005 env[1340]: time="2024-04-12T18:32:12.339968728Z" level=warning msg="cleaning up after shim disconnected" id=0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7 namespace=k8s.io Apr 12 18:32:12.341005 env[1340]: time="2024-04-12T18:32:12.339978768Z" level=info msg="cleaning up dead shim" Apr 12 18:32:12.347345 env[1340]: time="2024-04-12T18:32:12.347297209Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:32:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2869 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:32:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Apr 12 18:32:12.942279 env[1340]: time="2024-04-12T18:32:12.942239664Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:32:12.985005 env[1340]: time="2024-04-12T18:32:12.984955105Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\"" Apr 12 18:32:12.985759 env[1340]: time="2024-04-12T18:32:12.985732164Z" level=info msg="StartContainer for \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\"" Apr 12 18:32:13.008912 systemd[1]: run-containerd-runc-k8s.io-8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f-runc.VymgZk.mount: Deactivated successfully. Apr 12 18:32:13.012613 systemd[1]: Started cri-containerd-8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f.scope. Apr 12 18:32:13.040278 env[1340]: time="2024-04-12T18:32:13.040219737Z" level=info msg="StartContainer for \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\" returns successfully" Apr 12 18:32:13.047703 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:32:13.047892 systemd[1]: Stopped systemd-sysctl.service. 
Apr 12 18:32:13.048084 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:32:13.049942 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:32:13.055256 systemd[1]: cri-containerd-8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f.scope: Deactivated successfully. Apr 12 18:32:13.059565 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:32:13.093436 env[1340]: time="2024-04-12T18:32:13.093389989Z" level=info msg="shim disconnected" id=8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f Apr 12 18:32:13.093681 env[1340]: time="2024-04-12T18:32:13.093662821Z" level=warning msg="cleaning up after shim disconnected" id=8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f namespace=k8s.io Apr 12 18:32:13.093760 env[1340]: time="2024-04-12T18:32:13.093746619Z" level=info msg="cleaning up dead shim" Apr 12 18:32:13.101028 env[1340]: time="2024-04-12T18:32:13.100989225Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:32:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2934 runtime=io.containerd.runc.v2\n" Apr 12 18:32:13.942989 env[1340]: time="2024-04-12T18:32:13.942540747Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:32:13.968941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f-rootfs.mount: Deactivated successfully. Apr 12 18:32:14.072128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382598659.mount: Deactivated successfully. Apr 12 18:32:14.221663 env[1340]: time="2024-04-12T18:32:14.221606475Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\"" Apr 12 18:32:14.222716 env[1340]: time="2024-04-12T18:32:14.222667166Z" level=info msg="StartContainer for \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\"" Apr 12 18:32:14.242038 systemd[1]: Started cri-containerd-9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d.scope. Apr 12 18:32:14.274755 systemd[1]: cri-containerd-9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d.scope: Deactivated successfully. Apr 12 18:32:14.275748 env[1340]: time="2024-04-12T18:32:14.275711637Z" level=info msg="StartContainer for \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\" returns successfully" Apr 12 18:32:14.968918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d-rootfs.mount: Deactivated successfully. 
Apr 12 18:32:15.117387 env[1340]: time="2024-04-12T18:32:15.117335100Z" level=info msg="shim disconnected" id=9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d Apr 12 18:32:15.117387 env[1340]: time="2024-04-12T18:32:15.117383299Z" level=warning msg="cleaning up after shim disconnected" id=9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d namespace=k8s.io Apr 12 18:32:15.117387 env[1340]: time="2024-04-12T18:32:15.117392859Z" level=info msg="cleaning up dead shim" Apr 12 18:32:15.123871 env[1340]: time="2024-04-12T18:32:15.123822569Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:32:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2991 runtime=io.containerd.runc.v2\n" Apr 12 18:32:15.957089 env[1340]: time="2024-04-12T18:32:15.952497648Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:32:16.071321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619599643.mount: Deactivated successfully. Apr 12 18:32:16.076190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6803220.mount: Deactivated successfully. Apr 12 18:32:16.167892 env[1340]: time="2024-04-12T18:32:16.167840465Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\"" Apr 12 18:32:16.168641 env[1340]: time="2024-04-12T18:32:16.168610565Z" level=info msg="StartContainer for \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\"" Apr 12 18:32:16.186530 systemd[1]: Started cri-containerd-6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5.scope. Apr 12 18:32:16.209695 systemd[1]: cri-containerd-6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5.scope: Deactivated successfully. Apr 12 18:32:16.212245 env[1340]: time="2024-04-12T18:32:16.211960396Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2722697_b602_41d9_8a60_d0b138d8039e.slice/cri-containerd-6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5.scope/memory.events\": no such file or directory" Apr 12 18:32:16.226627 env[1340]: time="2024-04-12T18:32:16.226552056Z" level=info msg="StartContainer for \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\" returns successfully" Apr 12 18:32:17.062250 env[1340]: time="2024-04-12T18:32:17.062201902Z" level=info msg="shim disconnected" id=6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5 Apr 12 18:32:17.062250 env[1340]: time="2024-04-12T18:32:17.062248781Z" level=warning msg="cleaning up after shim disconnected" id=6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5 namespace=k8s.io Apr 12 18:32:17.062479 env[1340]: time="2024-04-12T18:32:17.062258381Z" level=info msg="cleaning up dead shim" Apr 12 18:32:17.068850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5-rootfs.mount: Deactivated successfully. 
Apr 12 18:32:17.071229 env[1340]: time="2024-04-12T18:32:17.071186270Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:32:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3045 runtime=io.containerd.runc.v2\n" Apr 12 18:32:17.961341 env[1340]: time="2024-04-12T18:32:17.961303585Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:32:18.122213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221228225.mount: Deactivated successfully. Apr 12 18:32:18.125896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753102812.mount: Deactivated successfully. Apr 12 18:32:18.212367 env[1340]: time="2024-04-12T18:32:18.212041048Z" level=info msg="CreateContainer within sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\"" Apr 12 18:32:18.214361 env[1340]: time="2024-04-12T18:32:18.212901866Z" level=info msg="StartContainer for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\"" Apr 12 18:32:18.230286 systemd[1]: Started cri-containerd-1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c.scope. Apr 12 18:32:18.267199 env[1340]: time="2024-04-12T18:32:18.267147119Z" level=info msg="StartContainer for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" returns successfully" Apr 12 18:32:18.373093 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Apr 12 18:32:18.439552 kubelet[2429]: I0412 18:32:18.438740 2429 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 12 18:32:18.464448 kubelet[2429]: I0412 18:32:18.464343 2429 topology_manager.go:215] "Topology Admit Handler" podUID="f8450732-e7cf-479d-a7ae-794028372cea" podNamespace="kube-system" podName="coredns-76f75df574-mhfxs" Apr 12 18:32:18.469781 systemd[1]: Created slice kubepods-burstable-podf8450732_e7cf_479d_a7ae_794028372cea.slice. Apr 12 18:32:18.471829 kubelet[2429]: I0412 18:32:18.471798 2429 topology_manager.go:215] "Topology Admit Handler" podUID="03e9150f-1018-4123-bcc8-fb7f8e4485d9" podNamespace="kube-system" podName="coredns-76f75df574-9fzx6" Apr 12 18:32:18.477762 systemd[1]: Created slice kubepods-burstable-pod03e9150f_1018_4123_bcc8_fb7f8e4485d9.slice. 
Apr 12 18:32:18.495041 kubelet[2429]: I0412 18:32:18.495016 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdnv\" (UniqueName: \"kubernetes.io/projected/03e9150f-1018-4123-bcc8-fb7f8e4485d9-kube-api-access-dpdnv\") pod \"coredns-76f75df574-9fzx6\" (UID: \"03e9150f-1018-4123-bcc8-fb7f8e4485d9\") " pod="kube-system/coredns-76f75df574-9fzx6" Apr 12 18:32:18.495257 kubelet[2429]: I0412 18:32:18.495245 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsqws\" (UniqueName: \"kubernetes.io/projected/f8450732-e7cf-479d-a7ae-794028372cea-kube-api-access-lsqws\") pod \"coredns-76f75df574-mhfxs\" (UID: \"f8450732-e7cf-479d-a7ae-794028372cea\") " pod="kube-system/coredns-76f75df574-mhfxs" Apr 12 18:32:18.495359 kubelet[2429]: I0412 18:32:18.495347 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03e9150f-1018-4123-bcc8-fb7f8e4485d9-config-volume\") pod \"coredns-76f75df574-9fzx6\" (UID: \"03e9150f-1018-4123-bcc8-fb7f8e4485d9\") " pod="kube-system/coredns-76f75df574-9fzx6" Apr 12 18:32:18.495459 kubelet[2429]: I0412 18:32:18.495448 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8450732-e7cf-479d-a7ae-794028372cea-config-volume\") pod \"coredns-76f75df574-mhfxs\" (UID: \"f8450732-e7cf-479d-a7ae-794028372cea\") " pod="kube-system/coredns-76f75df574-mhfxs" Apr 12 18:32:18.774856 env[1340]: time="2024-04-12T18:32:18.774745707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mhfxs,Uid:f8450732-e7cf-479d-a7ae-794028372cea,Namespace:kube-system,Attempt:0,}" Apr 12 18:32:18.781167 env[1340]: time="2024-04-12T18:32:18.781100504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9fzx6,Uid:03e9150f-1018-4123-bcc8-fb7f8e4485d9,Namespace:kube-system,Attempt:0,}" Apr 12 18:32:18.974893 kubelet[2429]: I0412 18:32:18.974853 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6p4rb" podStartSLOduration=10.904846975 podStartE2EDuration="28.974811434s" podCreationTimestamp="2024-04-12 18:31:50 +0000 UTC" firstStartedPulling="2024-04-12 18:31:52.500019132 +0000 UTC m=+14.826450188" lastFinishedPulling="2024-04-12 18:32:10.569983631 +0000 UTC m=+32.896414647" observedRunningTime="2024-04-12 18:32:18.972887323 +0000 UTC m=+41.299318379" watchObservedRunningTime="2024-04-12 18:32:18.974811434 +0000 UTC m=+41.301242490" Apr 12 18:32:19.024084 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Apr 12 18:32:20.656784 systemd-networkd[1486]: cilium_host: Link UP Apr 12 18:32:20.660172 systemd-networkd[1486]: cilium_net: Link UP Apr 12 18:32:20.663779 systemd-networkd[1486]: cilium_net: Gained carrier Apr 12 18:32:20.668745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:32:20.668854 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:32:20.673654 systemd-networkd[1486]: cilium_host: Gained carrier Apr 12 18:32:20.839529 systemd-networkd[1486]: cilium_vxlan: Link UP Apr 12 18:32:20.839536 systemd-networkd[1486]: cilium_vxlan: Gained carrier Apr 12 18:32:21.038187 systemd-networkd[1486]: cilium_net: Gained IPv6LL Apr 12 18:32:21.099090 kernel: NET: Registered PF_ALG protocol family Apr 12 18:32:21.597213 systemd-networkd[1486]: cilium_host: Gained IPv6LL Apr 12 18:32:21.899130 systemd-networkd[1486]: lxc_health: Link UP Apr 12 18:32:21.908357 systemd-networkd[1486]: lxc_health: Gained carrier Apr 12 18:32:21.909105 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:32:22.045205 systemd-networkd[1486]: cilium_vxlan: Gained IPv6LL Apr 12 18:32:22.342682 systemd-networkd[1486]: lxc72a37dcb6456: Link UP Apr 12 18:32:22.350093 kernel: eth0: renamed from tmpcaa39 Apr 12 18:32:22.360144 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc72a37dcb6456: link becomes ready Apr 12 18:32:22.360015 systemd-networkd[1486]: lxc72a37dcb6456: Gained carrier Apr 12 18:32:22.437982 systemd-networkd[1486]: lxc5a427b9c3ac0: Link UP Apr 12 18:32:22.460164 kernel: eth0: renamed from tmp3b5f9 Apr 12 18:32:22.470936 systemd-networkd[1486]: lxc5a427b9c3ac0: Gained carrier Apr 12 18:32:22.471137 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5a427b9c3ac0: link becomes ready Apr 12 18:32:23.262216 systemd-networkd[1486]: lxc_health: Gained IPv6LL Apr 12 18:32:23.838182 systemd-networkd[1486]: lxc72a37dcb6456: Gained IPv6LL Apr 12 18:32:24.478235 systemd-networkd[1486]: lxc5a427b9c3ac0: Gained IPv6LL Apr 12 18:32:26.046133 env[1340]: time="2024-04-12T18:32:26.046046262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:32:26.046133 env[1340]: time="2024-04-12T18:32:26.046100221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:32:26.046514 env[1340]: time="2024-04-12T18:32:26.046130860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:32:26.047052 env[1340]: time="2024-04-12T18:32:26.046678247Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa398f501c31b1c5ac284e8e1792cac81ae0e0246e01b09de97e8a9e58677df pid=3595 runtime=io.containerd.runc.v2 Apr 12 18:32:26.064384 systemd[1]: run-containerd-runc-k8s.io-caa398f501c31b1c5ac284e8e1792cac81ae0e0246e01b09de97e8a9e58677df-runc.2WFyNR.mount: Deactivated successfully. Apr 12 18:32:26.067754 systemd[1]: Started cri-containerd-caa398f501c31b1c5ac284e8e1792cac81ae0e0246e01b09de97e8a9e58677df.scope. Apr 12 18:32:26.077135 env[1340]: time="2024-04-12T18:32:26.076959485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:32:26.077135 env[1340]: time="2024-04-12T18:32:26.077004724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:32:26.077135 env[1340]: time="2024-04-12T18:32:26.077027603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:32:26.083145 env[1340]: time="2024-04-12T18:32:26.083077499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b5f91c0fcb973c056179c958c2bdb7e7cd45b0315bb3ea915059c062b9711da pid=3623 runtime=io.containerd.runc.v2 Apr 12 18:32:26.113733 systemd[1]: Started cri-containerd-3b5f91c0fcb973c056179c958c2bdb7e7cd45b0315bb3ea915059c062b9711da.scope. Apr 12 18:32:26.139469 env[1340]: time="2024-04-12T18:32:26.139407914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mhfxs,Uid:f8450732-e7cf-479d-a7ae-794028372cea,Namespace:kube-system,Attempt:0,} returns sandbox id \"caa398f501c31b1c5ac284e8e1792cac81ae0e0246e01b09de97e8a9e58677df\"" Apr 12 18:32:26.145514 env[1340]: time="2024-04-12T18:32:26.145467249Z" level=info msg="CreateContainer within sandbox \"caa398f501c31b1c5ac284e8e1792cac81ae0e0246e01b09de97e8a9e58677df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:32:26.164347 env[1340]: time="2024-04-12T18:32:26.164299240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9fzx6,Uid:03e9150f-1018-4123-bcc8-fb7f8e4485d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b5f91c0fcb973c056179c958c2bdb7e7cd45b0315bb3ea915059c062b9711da\"" Apr 12 18:32:26.169712 env[1340]: time="2024-04-12T18:32:26.169668712Z" level=info msg="CreateContainer within sandbox \"3b5f91c0fcb973c056179c958c2bdb7e7cd45b0315bb3ea915059c062b9711da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:32:26.564089 env[1340]: time="2024-04-12T18:32:26.564014979Z" level=info msg="CreateContainer within sandbox \"caa398f501c31b1c5ac284e8e1792cac81ae0e0246e01b09de97e8a9e58677df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3401c9330166d5d514255f4493c4e8f9657a326353584927712e2816595dff1f\"" Apr 12 18:32:26.564969 env[1340]: time="2024-04-12T18:32:26.564943797Z" level=info msg="StartContainer for \"3401c9330166d5d514255f4493c4e8f9657a326353584927712e2816595dff1f\"" Apr 12 18:32:26.578809 systemd[1]: Started cri-containerd-3401c9330166d5d514255f4493c4e8f9657a326353584927712e2816595dff1f.scope. Apr 12 18:32:26.662980 env[1340]: time="2024-04-12T18:32:26.662919379Z" level=info msg="StartContainer for \"3401c9330166d5d514255f4493c4e8f9657a326353584927712e2816595dff1f\" returns successfully" Apr 12 18:32:26.727159 env[1340]: time="2024-04-12T18:32:26.727103967Z" level=info msg="CreateContainer within sandbox \"3b5f91c0fcb973c056179c958c2bdb7e7cd45b0315bb3ea915059c062b9711da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88f432ab3c4f59be806f014227a32176715d734a85e9aec49935e3917e7d2314\"" Apr 12 18:32:26.728100 env[1340]: time="2024-04-12T18:32:26.728056024Z" level=info msg="StartContainer for \"88f432ab3c4f59be806f014227a32176715d734a85e9aec49935e3917e7d2314\"" Apr 12 18:32:26.746645 systemd[1]: Started cri-containerd-88f432ab3c4f59be806f014227a32176715d734a85e9aec49935e3917e7d2314.scope. 
Apr 12 18:32:26.780634 env[1340]: time="2024-04-12T18:32:26.780578650Z" level=info msg="StartContainer for \"88f432ab3c4f59be806f014227a32176715d734a85e9aec49935e3917e7d2314\" returns successfully" Apr 12 18:32:26.984402 kubelet[2429]: I0412 18:32:26.984367 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mhfxs" podStartSLOduration=36.984328427 podStartE2EDuration="36.984328427s" podCreationTimestamp="2024-04-12 18:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:32:26.983767721 +0000 UTC m=+49.310198777" watchObservedRunningTime="2024-04-12 18:32:26.984328427 +0000 UTC m=+49.310759483" Apr 12 18:34:33.406025 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.12.6:52002.service. Apr 12 18:34:33.803836 sshd[3771]: Accepted publickey for core from 10.200.12.6 port 52002 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:33.805583 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:33.809915 systemd[1]: Started session-8.scope. Apr 12 18:34:33.810239 systemd-logind[1326]: New session 8 of user core. Apr 12 18:34:34.639279 sshd[3771]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:34.642285 systemd[1]: sshd@5-10.200.20.12:22-10.200.12.6:52002.service: Deactivated successfully. Apr 12 18:34:34.643379 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:34:34.644119 systemd-logind[1326]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:34:34.645194 systemd-logind[1326]: Removed session 8. Apr 12 18:34:39.707754 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.12.6:44316.service. Apr 12 18:34:40.107015 sshd[3787]: Accepted publickey for core from 10.200.12.6 port 44316 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:40.108182 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:40.112701 systemd[1]: Started session-9.scope. Apr 12 18:34:40.113889 systemd-logind[1326]: New session 9 of user core. Apr 12 18:34:40.458288 sshd[3787]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:40.460863 systemd[1]: sshd@6-10.200.20.12:22-10.200.12.6:44316.service: Deactivated successfully. Apr 12 18:34:40.461620 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:34:40.462164 systemd-logind[1326]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:34:40.462834 systemd-logind[1326]: Removed session 9. Apr 12 18:34:45.526383 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.12.6:58214.service. Apr 12 18:34:45.931510 sshd[3802]: Accepted publickey for core from 10.200.12.6 port 58214 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:45.933217 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:45.937340 systemd-logind[1326]: New session 10 of user core. Apr 12 18:34:45.937843 systemd[1]: Started session-10.scope. Apr 12 18:34:46.293821 sshd[3802]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:46.296594 systemd-logind[1326]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:34:46.296858 systemd[1]: sshd@7-10.200.20.12:22-10.200.12.6:58214.service: Deactivated successfully. Apr 12 18:34:46.297579 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:34:46.298411 systemd-logind[1326]: Removed session 10. 
Apr 12 18:34:51.362415 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.12.6:58216.service. Apr 12 18:34:51.763432 sshd[3814]: Accepted publickey for core from 10.200.12.6 port 58216 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:51.765137 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:51.769054 systemd-logind[1326]: New session 11 of user core. Apr 12 18:34:51.769573 systemd[1]: Started session-11.scope. Apr 12 18:34:52.117294 sshd[3814]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:52.120290 systemd[1]: sshd@8-10.200.20.12:22-10.200.12.6:58216.service: Deactivated successfully. Apr 12 18:34:52.121043 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:34:52.121667 systemd-logind[1326]: Session 11 logged out. Waiting for processes to exit. Apr 12 18:34:52.122589 systemd-logind[1326]: Removed session 11. Apr 12 18:34:52.186189 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.12.6:58218.service. Apr 12 18:34:52.591919 sshd[3827]: Accepted publickey for core from 10.200.12.6 port 58218 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:52.593298 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:52.597718 systemd[1]: Started session-12.scope. Apr 12 18:34:52.599133 systemd-logind[1326]: New session 12 of user core. Apr 12 18:34:52.996927 sshd[3827]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:52.999461 systemd[1]: sshd@9-10.200.20.12:22-10.200.12.6:58218.service: Deactivated successfully. Apr 12 18:34:53.000194 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:34:53.000742 systemd-logind[1326]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:34:53.001756 systemd-logind[1326]: Removed session 12. Apr 12 18:34:53.064557 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.12.6:58220.service. Apr 12 18:34:53.464588 sshd[3839]: Accepted publickey for core from 10.200.12.6 port 58220 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:53.465850 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:53.470165 systemd-logind[1326]: New session 13 of user core. Apr 12 18:34:53.470385 systemd[1]: Started session-13.scope. Apr 12 18:34:53.832237 sshd[3839]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:53.835518 systemd-logind[1326]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:34:53.836792 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:34:53.838239 systemd-logind[1326]: Removed session 13. Apr 12 18:34:53.838928 systemd[1]: sshd@10-10.200.20.12:22-10.200.12.6:58220.service: Deactivated successfully. Apr 12 18:34:58.899368 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.12.6:37064.service. Apr 12 18:34:59.295605 sshd[3850]: Accepted publickey for core from 10.200.12.6 port 37064 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:34:59.297244 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:34:59.301457 systemd[1]: Started session-14.scope. Apr 12 18:34:59.301925 systemd-logind[1326]: New session 14 of user core. Apr 12 18:34:59.654302 sshd[3850]: pam_unix(sshd:session): session closed for user core Apr 12 18:34:59.657521 systemd[1]: sshd@11-10.200.20.12:22-10.200.12.6:37064.service: Deactivated successfully. Apr 12 18:34:59.657711 systemd-logind[1326]: Session 14 logged out. 
Waiting for processes to exit. Apr 12 18:34:59.658265 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:34:59.658989 systemd-logind[1326]: Removed session 14. Apr 12 18:35:04.726605 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.12.6:37080.service. Apr 12 18:35:05.156700 sshd[3861]: Accepted publickey for core from 10.200.12.6 port 37080 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:35:05.158283 sshd[3861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:35:05.162518 systemd[1]: Started session-15.scope. Apr 12 18:35:05.162813 systemd-logind[1326]: New session 15 of user core. Apr 12 18:35:05.533673 sshd[3861]: pam_unix(sshd:session): session closed for user core Apr 12 18:35:05.536193 systemd[1]: sshd@12-10.200.20.12:22-10.200.12.6:37080.service: Deactivated successfully. Apr 12 18:35:05.536928 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:35:05.537456 systemd-logind[1326]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:35:05.538165 systemd-logind[1326]: Removed session 15. Apr 12 18:35:05.606340 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.12.6:41814.service. Apr 12 18:35:06.012160 sshd[3872]: Accepted publickey for core from 10.200.12.6 port 41814 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:35:06.013747 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:35:06.017987 systemd[1]: Started session-16.scope. Apr 12 18:35:06.018910 systemd-logind[1326]: New session 16 of user core. Apr 12 18:35:06.395374 sshd[3872]: pam_unix(sshd:session): session closed for user core Apr 12 18:35:06.398173 systemd[1]: sshd@13-10.200.20.12:22-10.200.12.6:41814.service: Deactivated successfully. Apr 12 18:35:06.398897 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:35:06.399435 systemd-logind[1326]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:35:06.400129 systemd-logind[1326]: Removed session 16. Apr 12 18:35:06.463828 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.12.6:41816.service. Apr 12 18:35:06.862977 sshd[3885]: Accepted publickey for core from 10.200.12.6 port 41816 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:35:06.864547 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:35:06.868799 systemd[1]: Started session-17.scope. Apr 12 18:35:06.869242 systemd-logind[1326]: New session 17 of user core. Apr 12 18:35:08.392165 sshd[3885]: pam_unix(sshd:session): session closed for user core Apr 12 18:35:08.395494 systemd[1]: sshd@14-10.200.20.12:22-10.200.12.6:41816.service: Deactivated successfully. Apr 12 18:35:08.396229 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:35:08.396514 systemd-logind[1326]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:35:08.397576 systemd-logind[1326]: Removed session 17. Apr 12 18:35:08.460106 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.12.6:41824.service. Apr 12 18:35:08.859418 sshd[3903]: Accepted publickey for core from 10.200.12.6 port 41824 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:35:08.860647 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:35:08.865185 systemd[1]: Started session-18.scope. Apr 12 18:35:08.865613 systemd-logind[1326]: New session 18 of user core. 
Apr 12 18:35:09.333360 sshd[3903]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:09.335843 systemd[1]: sshd@15-10.200.20.12:22-10.200.12.6:41824.service: Deactivated successfully.
Apr 12 18:35:09.336551 systemd[1]: session-18.scope: Deactivated successfully.
Apr 12 18:35:09.337723 systemd-logind[1326]: Session 18 logged out. Waiting for processes to exit.
Apr 12 18:35:09.338614 systemd-logind[1326]: Removed session 18.
Apr 12 18:35:09.401254 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.12.6:41838.service.
Apr 12 18:35:09.806458 sshd[3912]: Accepted publickey for core from 10.200.12.6 port 41838 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:09.808056 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:09.812325 systemd[1]: Started session-19.scope.
Apr 12 18:35:09.812753 systemd-logind[1326]: New session 19 of user core.
Apr 12 18:35:10.171487 sshd[3912]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:10.174814 systemd[1]: sshd@16-10.200.20.12:22-10.200.12.6:41838.service: Deactivated successfully.
Apr 12 18:35:10.175016 systemd-logind[1326]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:35:10.175563 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:35:10.176327 systemd-logind[1326]: Removed session 19.
Apr 12 18:35:15.252942 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.12.6:34536.service.
Apr 12 18:35:15.684272 sshd[3926]: Accepted publickey for core from 10.200.12.6 port 34536 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:15.685907 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:15.689976 systemd[1]: Started session-20.scope.
Apr 12 18:35:15.690152 systemd-logind[1326]: New session 20 of user core.
Apr 12 18:35:16.061194 sshd[3926]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:16.063738 systemd[1]: sshd@17-10.200.20.12:22-10.200.12.6:34536.service: Deactivated successfully.
Apr 12 18:35:16.064463 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:35:16.064991 systemd-logind[1326]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:35:16.066460 systemd-logind[1326]: Removed session 20.
Apr 12 18:35:21.135052 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.12.6:34548.service.
Apr 12 18:35:21.566391 sshd[3937]: Accepted publickey for core from 10.200.12.6 port 34548 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:21.568118 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:21.572401 systemd[1]: Started session-21.scope.
Apr 12 18:35:21.572691 systemd-logind[1326]: New session 21 of user core.
Apr 12 18:35:21.944275 sshd[3937]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:21.947654 systemd-logind[1326]: Session 21 logged out. Waiting for processes to exit.
Apr 12 18:35:21.948891 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 18:35:21.949655 systemd[1]: sshd@18-10.200.20.12:22-10.200.12.6:34548.service: Deactivated successfully.
Apr 12 18:35:21.950735 systemd-logind[1326]: Removed session 21.
Apr 12 18:35:27.013310 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.12.6:53148.service.
Apr 12 18:35:27.417862 sshd[3950]: Accepted publickey for core from 10.200.12.6 port 53148 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:27.419497 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:27.423905 systemd[1]: Started session-22.scope.
Apr 12 18:35:27.424497 systemd-logind[1326]: New session 22 of user core.
Apr 12 18:35:27.770936 sshd[3950]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:27.773510 systemd-logind[1326]: Session 22 logged out. Waiting for processes to exit.
Apr 12 18:35:27.773711 systemd[1]: sshd@19-10.200.20.12:22-10.200.12.6:53148.service: Deactivated successfully.
Apr 12 18:35:27.774457 systemd[1]: session-22.scope: Deactivated successfully.
Apr 12 18:35:27.775163 systemd-logind[1326]: Removed session 22.
Apr 12 18:35:27.838276 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.12.6:53150.service.
Apr 12 18:35:28.238500 sshd[3962]: Accepted publickey for core from 10.200.12.6 port 53150 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:28.240184 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:28.244394 systemd[1]: Started session-23.scope.
Apr 12 18:35:28.245723 systemd-logind[1326]: New session 23 of user core.
Apr 12 18:35:30.585933 kubelet[2429]: I0412 18:35:30.585873 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9fzx6" podStartSLOduration=220.585819039 podStartE2EDuration="3m40.585819039s" podCreationTimestamp="2024-04-12 18:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:32:27.011453502 +0000 UTC m=+49.337884558" watchObservedRunningTime="2024-04-12 18:35:30.585819039 +0000 UTC m=+232.912250095"
Apr 12 18:35:30.612020 env[1340]: time="2024-04-12T18:35:30.611953705Z" level=info msg="StopContainer for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" with timeout 30 (s)"
Apr 12 18:35:30.612577 env[1340]: time="2024-04-12T18:35:30.612543051Z" level=info msg="Stop container \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" with signal terminated"
Apr 12 18:35:30.623564 env[1340]: time="2024-04-12T18:35:30.623500393Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:35:30.629525 systemd[1]: cri-containerd-373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd.scope: Deactivated successfully.
Apr 12 18:35:30.632053 env[1340]: time="2024-04-12T18:35:30.632003954Z" level=info msg="StopContainer for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" with timeout 2 (s)"
Apr 12 18:35:30.632605 env[1340]: time="2024-04-12T18:35:30.632544501Z" level=info msg="Stop container \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" with signal terminated"
Apr 12 18:35:30.643506 systemd-networkd[1486]: lxc_health: Link DOWN
Apr 12 18:35:30.643517 systemd-networkd[1486]: lxc_health: Lost carrier
Apr 12 18:35:30.653224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd-rootfs.mount: Deactivated successfully.
Apr 12 18:35:30.667858 systemd[1]: cri-containerd-1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c.scope: Deactivated successfully.
Apr 12 18:35:30.668220 systemd[1]: cri-containerd-1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c.scope: Consumed 6.490s CPU time.
Apr 12 18:35:30.688265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c-rootfs.mount: Deactivated successfully.
Apr 12 18:35:30.747500 env[1340]: time="2024-04-12T18:35:30.747172766Z" level=info msg="shim disconnected" id=373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd
Apr 12 18:35:30.747500 env[1340]: time="2024-04-12T18:35:30.747235005Z" level=warning msg="cleaning up after shim disconnected" id=373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd namespace=k8s.io
Apr 12 18:35:30.747500 env[1340]: time="2024-04-12T18:35:30.747244564Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:30.749846 env[1340]: time="2024-04-12T18:35:30.748655451Z" level=info msg="shim disconnected" id=1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c
Apr 12 18:35:30.749846 env[1340]: time="2024-04-12T18:35:30.748736769Z" level=warning msg="cleaning up after shim disconnected" id=1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c namespace=k8s.io
Apr 12 18:35:30.749846 env[1340]: time="2024-04-12T18:35:30.748746529Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:30.759043 env[1340]: time="2024-04-12T18:35:30.758956129Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4028 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:30.762819 env[1340]: time="2024-04-12T18:35:30.762768799Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4029 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:30.770056 env[1340]: time="2024-04-12T18:35:30.770011589Z" level=info msg="StopContainer for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" returns successfully"
Apr 12 18:35:30.770797 env[1340]: time="2024-04-12T18:35:30.770771331Z" level=info msg="StopPodSandbox for \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\""
Apr 12 18:35:30.770977 env[1340]: time="2024-04-12T18:35:30.770933287Z" level=info msg="Container to stop \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:35:30.771226 env[1340]: time="2024-04-12T18:35:30.771133443Z" level=info msg="StopContainer for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" returns successfully"
Apr 12 18:35:30.772994 env[1340]: time="2024-04-12T18:35:30.771735589Z" level=info msg="StopPodSandbox for \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\""
Apr 12 18:35:30.772994 env[1340]: time="2024-04-12T18:35:30.771841426Z" level=info msg="Container to stop \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:35:30.772994 env[1340]: time="2024-04-12T18:35:30.772087060Z" level=info msg="Container to stop \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:35:30.772994 env[1340]: time="2024-04-12T18:35:30.772104140Z" level=info msg="Container to stop \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:35:30.772994 env[1340]: time="2024-04-12T18:35:30.772121660Z" level=info msg="Container to stop \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:35:30.772994 env[1340]: time="2024-04-12T18:35:30.772136539Z" level=info msg="Container to stop \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:35:30.774563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d-shm.mount: Deactivated successfully.
Apr 12 18:35:30.778891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0-shm.mount: Deactivated successfully.
Apr 12 18:35:30.785140 systemd[1]: cri-containerd-87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0.scope: Deactivated successfully.
Apr 12 18:35:30.785923 systemd[1]: cri-containerd-9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d.scope: Deactivated successfully.
Apr 12 18:35:30.823580 env[1340]: time="2024-04-12T18:35:30.823513411Z" level=info msg="shim disconnected" id=9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d
Apr 12 18:35:30.824057 env[1340]: time="2024-04-12T18:35:30.824035959Z" level=warning msg="cleaning up after shim disconnected" id=9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d namespace=k8s.io
Apr 12 18:35:30.824171 env[1340]: time="2024-04-12T18:35:30.824155356Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:30.824441 env[1340]: time="2024-04-12T18:35:30.823971881Z" level=info msg="shim disconnected" id=87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0
Apr 12 18:35:30.824840 env[1340]: time="2024-04-12T18:35:30.824557987Z" level=warning msg="cleaning up after shim disconnected" id=87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0 namespace=k8s.io
Apr 12 18:35:30.824948 env[1340]: time="2024-04-12T18:35:30.824923418Z" level=info msg="cleaning up dead shim"
Apr 12 18:35:30.834920 env[1340]: time="2024-04-12T18:35:30.834848265Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4090 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:30.835332 env[1340]: time="2024-04-12T18:35:30.835296374Z" level=info msg="TearDown network for sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" successfully"
Apr 12 18:35:30.835399 env[1340]: time="2024-04-12T18:35:30.835333134Z" level=info msg="StopPodSandbox for \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" returns successfully"
Apr 12 18:35:30.841094 env[1340]: time="2024-04-12T18:35:30.839360279Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4089 runtime=io.containerd.runc.v2\n"
Apr 12 18:35:30.842316 env[1340]: time="2024-04-12T18:35:30.842223612Z" level=info msg="TearDown network for sandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" successfully"
Apr 12 18:35:30.842316 env[1340]: time="2024-04-12T18:35:30.842292690Z" level=info msg="StopPodSandbox for \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" returns successfully"
Apr 12 18:35:30.870696 kubelet[2429]: I0412 18:35:30.870584 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-cgroup\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.870907 kubelet[2429]: I0412 18:35:30.870892 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cni-path\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871039 kubelet[2429]: I0412 18:35:30.871028 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-config-path\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871176 kubelet[2429]: I0412 18:35:30.871165 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-hubble-tls\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871264 kubelet[2429]: I0412 18:35:30.871254 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-net\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871356 kubelet[2429]: I0412 18:35:30.871346 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bqww\" (UniqueName: \"kubernetes.io/projected/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-kube-api-access-2bqww\") pod \"d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5\" (UID: \"d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5\") "
Apr 12 18:35:30.871525 kubelet[2429]: I0412 18:35:30.871515 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-kernel\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871614 kubelet[2429]: I0412 18:35:30.871605 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-etc-cni-netd\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871700 kubelet[2429]: I0412 18:35:30.871690 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjx9m\" (UniqueName: \"kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-kube-api-access-bjx9m\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871794 kubelet[2429]: I0412 18:35:30.871784 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2722697-b602-41d9-8a60-d0b138d8039e-clustermesh-secrets\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871880 kubelet[2429]: I0412 18:35:30.871870 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-hostproc\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.871969 kubelet[2429]: I0412 18:35:30.871959 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-lib-modules\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.872056 kubelet[2429]: I0412 18:35:30.872046 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-run\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.872151 kubelet[2429]: I0412 18:35:30.872141 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-bpf-maps\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.872240 kubelet[2429]: I0412 18:35:30.872230 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-xtables-lock\") pod \"c2722697-b602-41d9-8a60-d0b138d8039e\" (UID: \"c2722697-b602-41d9-8a60-d0b138d8039e\") "
Apr 12 18:35:30.872329 kubelet[2429]: I0412 18:35:30.872319 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-cilium-config-path\") pod \"d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5\" (UID: \"d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5\") "
Apr 12 18:35:30.873152 kubelet[2429]: I0412 18:35:30.873123 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.873220 kubelet[2429]: I0412 18:35:30.870989 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.873220 kubelet[2429]: I0412 18:35:30.870660 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.875158 kubelet[2429]: I0412 18:35:30.875123 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5" (UID: "d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:35:30.877182 kubelet[2429]: I0412 18:35:30.875308 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 12 18:35:30.877300 kubelet[2429]: I0412 18:35:30.875330 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.877369 kubelet[2429]: I0412 18:35:30.875483 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.881163 kubelet[2429]: I0412 18:35:30.881136 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.881427 kubelet[2429]: I0412 18:35:30.881276 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.881515 kubelet[2429]: I0412 18:35:30.881292 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.882642 kubelet[2429]: I0412 18:35:30.881374 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.882770 kubelet[2429]: I0412 18:35:30.881388 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:35:30.882960 kubelet[2429]: I0412 18:35:30.882940 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-kube-api-access-bjx9m" (OuterVolumeSpecName: "kube-api-access-bjx9m") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "kube-api-access-bjx9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:35:30.884342 kubelet[2429]: I0412 18:35:30.884320 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:35:30.885728 kubelet[2429]: I0412 18:35:30.885675 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2722697-b602-41d9-8a60-d0b138d8039e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2722697-b602-41d9-8a60-d0b138d8039e" (UID: "c2722697-b602-41d9-8a60-d0b138d8039e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:35:30.885983 kubelet[2429]: I0412 18:35:30.885962 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-kube-api-access-2bqww" (OuterVolumeSpecName: "kube-api-access-2bqww") pod "d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5" (UID: "d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5"). InnerVolumeSpecName "kube-api-access-2bqww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:35:30.973273 kubelet[2429]: I0412 18:35:30.973242 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-cilium-config-path\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973460 kubelet[2429]: I0412 18:35:30.973448 2429 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cni-path\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973559 kubelet[2429]: I0412 18:35:30.973526 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-cgroup\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973629 kubelet[2429]: I0412 18:35:30.973621 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973703 kubelet[2429]: I0412 18:35:30.973693 2429 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-etc-cni-netd\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973770 kubelet[2429]: I0412 18:35:30.973761 2429 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bjx9m\" (UniqueName: \"kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-kube-api-access-bjx9m\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973851 kubelet[2429]: I0412 18:35:30.973840 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-config-path\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.973968 kubelet[2429]: I0412 18:35:30.973909 2429 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2722697-b602-41d9-8a60-d0b138d8039e-hubble-tls\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974049 kubelet[2429]: I0412 18:35:30.974039 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-host-proc-sys-net\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974140 kubelet[2429]: I0412 18:35:30.974130 2429 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2bqww\" (UniqueName: \"kubernetes.io/projected/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5-kube-api-access-2bqww\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974207 kubelet[2429]: I0412 18:35:30.974198 2429 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2722697-b602-41d9-8a60-d0b138d8039e-clustermesh-secrets\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974277 kubelet[2429]: I0412 18:35:30.974267 2429 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-hostproc\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974345 kubelet[2429]: I0412 18:35:30.974336 2429 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-lib-modules\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974407 kubelet[2429]: I0412 18:35:30.974399 2429 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-bpf-maps\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974474 kubelet[2429]: I0412 18:35:30.974466 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-cilium-run\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:30.974538 kubelet[2429]: I0412 18:35:30.974527 2429 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2722697-b602-41d9-8a60-d0b138d8039e-xtables-lock\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\""
Apr 12 18:35:31.304876 kubelet[2429]: I0412 18:35:31.304848 2429 scope.go:117] "RemoveContainer" containerID="1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c"
Apr 12 18:35:31.308791 systemd[1]: Removed slice kubepods-burstable-podc2722697_b602_41d9_8a60_d0b138d8039e.slice.
Apr 12 18:35:31.308877 systemd[1]: kubepods-burstable-podc2722697_b602_41d9_8a60_d0b138d8039e.slice: Consumed 6.579s CPU time.
Apr 12 18:35:31.312780 env[1340]: time="2024-04-12T18:35:31.312263492Z" level=info msg="RemoveContainer for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\""
Apr 12 18:35:31.316756 systemd[1]: Removed slice kubepods-besteffort-podd01d15c0_24d1_4d66_8f36_2ff74cf8c3f5.slice.
Apr 12 18:35:31.328395 env[1340]: time="2024-04-12T18:35:31.328262036Z" level=info msg="RemoveContainer for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" returns successfully"
Apr 12 18:35:31.328541 kubelet[2429]: I0412 18:35:31.328521 2429 scope.go:117] "RemoveContainer" containerID="6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5"
Apr 12 18:35:31.330885 env[1340]: time="2024-04-12T18:35:31.330798177Z" level=info msg="RemoveContainer for \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\""
Apr 12 18:35:31.342995 env[1340]: time="2024-04-12T18:35:31.342936332Z" level=info msg="RemoveContainer for \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\" returns successfully"
Apr 12 18:35:31.343435 kubelet[2429]: I0412 18:35:31.343315 2429 scope.go:117] "RemoveContainer" containerID="9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d"
Apr 12 18:35:31.344457 env[1340]: time="2024-04-12T18:35:31.344417057Z" level=info msg="RemoveContainer for \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\""
Apr 12 18:35:31.353865 env[1340]: time="2024-04-12T18:35:31.353815077Z" level=info msg="RemoveContainer for \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\" returns successfully"
Apr 12 18:35:31.354099 kubelet[2429]: I0412 18:35:31.354057 2429 scope.go:117] "RemoveContainer" containerID="8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f"
Apr 12 18:35:31.355379 env[1340]: time="2024-04-12T18:35:31.355134766Z" level=info msg="RemoveContainer for \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\""
Apr 12 18:35:31.365226 env[1340]: time="2024-04-12T18:35:31.365185450Z" level=info msg="RemoveContainer for \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\" returns successfully"
Apr 12 18:35:31.365592 kubelet[2429]: I0412 18:35:31.365561 2429 scope.go:117] "RemoveContainer" containerID="0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7"
Apr 12 18:35:31.366655 env[1340]: time="2024-04-12T18:35:31.366626016Z" level=info msg="RemoveContainer for \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\""
Apr 12 18:35:31.377958 env[1340]: time="2024-04-12T18:35:31.377913791Z" level=info msg="RemoveContainer for \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\" returns successfully"
Apr 12 18:35:31.378186 kubelet[2429]: I0412 18:35:31.378159 2429 scope.go:117] "RemoveContainer" containerID="1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c"
Apr 12 18:35:31.378566 env[1340]: time="2024-04-12T18:35:31.378493857Z" level=error msg="ContainerStatus for \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\": not found"
Apr 12 18:35:31.378704 kubelet[2429]: E0412 18:35:31.378684 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\": not found" containerID="1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c"
Apr 12 18:35:31.378805 kubelet[2429]: I0412 18:35:31.378788 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c"} err="failed to get container status \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\": rpc error: code = NotFound desc = an error occurred when try to find container \"1372634fd0eefc61b6630f901a1c3f1ee4ad9521a25ab9cba40a6112f0bf503c\": not found"
Apr 12 18:35:31.378805 kubelet[2429]: I0412 18:35:31.378804 2429 scope.go:117] "RemoveContainer" containerID="6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5"
Apr 12 18:35:31.379029 env[1340]: time="2024-04-12T18:35:31.378983206Z" level=error msg="ContainerStatus for \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\": not found"
Apr 12 18:35:31.379261 kubelet[2429]: E0412 18:35:31.379240 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\": not found" containerID="6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5"
Apr 12 18:35:31.379316 kubelet[2429]: I0412 18:35:31.379286 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5"} err="failed to get container status \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6411b6bd92899adabed0bffb99ece4d3357433483ca43f8c25ebe2ad6e6ec5e5\": not found"
Apr 12 18:35:31.379316 kubelet[2429]: I0412 18:35:31.379297 2429 scope.go:117] "RemoveContainer" containerID="9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d"
Apr 12 18:35:31.379521 env[1340]: time="2024-04-12T18:35:31.379472034Z" level=error msg="ContainerStatus for \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\": not found"
Apr 12 18:35:31.379654 kubelet[2429]: E0412 18:35:31.379634 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\": not found" containerID="9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d"
Apr 12 18:35:31.379702 kubelet[2429]: I0412 18:35:31.379666 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d"} err="failed to get container status \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e08cb9ffc85574de02539bf55746dcc8e4a96250a29ffafd00759a9a2b0f58d\": not found"
Apr 12 18:35:31.379702 kubelet[2429]: I0412 18:35:31.379678 2429 scope.go:117] "RemoveContainer" containerID="8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f"
Apr 12 18:35:31.379916 env[1340]: time="2024-04-12T18:35:31.379871905Z" level=error msg="ContainerStatus for \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\": not found"
Apr 12 18:35:31.380136 kubelet[2429]: E0412 18:35:31.380123 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\": not found" containerID="8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f"
Apr 12 18:35:31.380240 kubelet[2429]: I0412 18:35:31.380229 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f"} err="failed to get container status \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b2a0e36656c17080eef1237067df8cae8b1cadb3f759fdf941ad52ca943bd2f\": not found"
Apr 12 18:35:31.380304 kubelet[2429]: I0412 18:35:31.380295 2429 scope.go:117] "RemoveContainer" containerID="0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7"
Apr 12 18:35:31.380594 env[1340]: time="2024-04-12T18:35:31.380542929Z" level=error msg="ContainerStatus for \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\": not found"
Apr 12 18:35:31.380774 kubelet[2429]: E0412 18:35:31.380757 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\": not found" containerID="0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7"
Apr 12 18:35:31.380816 kubelet[2429]: I0412 18:35:31.380808 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7"} err="failed to get container status \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"0aefdca11b657cfde90ba4f3a76d44007e99f7ffe64920812b2c20cf710369a7\": not found"
Apr 12 18:35:31.380844 kubelet[2429]: I0412 18:35:31.380819 2429 scope.go:117] "RemoveContainer" containerID="373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd"
Apr 12 18:35:31.381992 env[1340]: time="2024-04-12T18:35:31.381958976Z" level=info msg="RemoveContainer for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\""
Apr 12 18:35:31.393056 env[1340]: time="2024-04-12T18:35:31.393015716Z" level=info msg="RemoveContainer for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" returns successfully"
Apr 12 18:35:31.394235 kubelet[2429]: I0412 18:35:31.394212 2429 scope.go:117] "RemoveContainer" containerID="373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd"
Apr 12 18:35:31.394640 env[1340]: time="2024-04-12T18:35:31.394585640Z" level=error msg="ContainerStatus for \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\": not found"
Apr 12 18:35:31.394836 kubelet[2429]: E0412 18:35:31.394822 2429 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\": not found" containerID="373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd"
Apr 12 18:35:31.394941 kubelet[2429]: I0412 18:35:31.394930 2429 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd"} err="failed to get container status \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\": rpc error: code = NotFound desc = an error occurred when try to find container \"373082fe9b3ddde6e668f0cda42fb2f3ccf0ae98720a030260b857cd43295dfd\": not found"
Apr 12 18:35:31.601944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0-rootfs.mount: Deactivated successfully.
Apr 12 18:35:31.602075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d-rootfs.mount: Deactivated successfully.
Apr 12 18:35:31.602136 systemd[1]: var-lib-kubelet-pods-c2722697\x2db602\x2d41d9\x2d8a60\x2dd0b138d8039e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjx9m.mount: Deactivated successfully.
Apr 12 18:35:31.602194 systemd[1]: var-lib-kubelet-pods-d01d15c0\x2d24d1\x2d4d66\x2d8f36\x2d2ff74cf8c3f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2bqww.mount: Deactivated successfully.
Apr 12 18:35:31.602245 systemd[1]: var-lib-kubelet-pods-c2722697\x2db602\x2d41d9\x2d8a60\x2dd0b138d8039e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 12 18:35:31.602298 systemd[1]: var-lib-kubelet-pods-c2722697\x2db602\x2d41d9\x2d8a60\x2dd0b138d8039e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 12 18:35:31.824544 kubelet[2429]: I0412 18:35:31.824517 2429 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" path="/var/lib/kubelet/pods/c2722697-b602-41d9-8a60-d0b138d8039e/volumes"
Apr 12 18:35:31.825451 kubelet[2429]: I0412 18:35:31.825435 2429 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5" path="/var/lib/kubelet/pods/d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5/volumes"
Apr 12 18:35:32.619050 sshd[3962]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:32.621750 systemd[1]: sshd@20-10.200.20.12:22-10.200.12.6:53150.service: Deactivated successfully.
Apr 12 18:35:32.622473 systemd[1]: session-23.scope: Deactivated successfully.
Apr 12 18:35:32.622638 systemd[1]: session-23.scope: Consumed 1.490s CPU time.
Apr 12 18:35:32.623046 systemd-logind[1326]: Session 23 logged out. Waiting for processes to exit.
Apr 12 18:35:32.624084 systemd-logind[1326]: Removed session 23.
Apr 12 18:35:32.686527 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.12.6:53154.service.
Apr 12 18:35:32.953611 kubelet[2429]: E0412 18:35:32.953567 2429 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:35:33.085912 sshd[4122]: Accepted publickey for core from 10.200.12.6 port 53154 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:33.087582 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:33.091991 systemd[1]: Started session-24.scope.
Apr 12 18:35:33.093146 systemd-logind[1326]: New session 24 of user core.
Apr 12 18:35:34.350460 kubelet[2429]: I0412 18:35:34.350405 2429 topology_manager.go:215] "Topology Admit Handler" podUID="bcc640c2-399e-44ed-a73c-f44daf87a12a" podNamespace="kube-system" podName="cilium-h9j67"
Apr 12 18:35:34.350802 kubelet[2429]: E0412 18:35:34.350482 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" containerName="apply-sysctl-overwrites"
Apr 12 18:35:34.350802 kubelet[2429]: E0412 18:35:34.350494 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" containerName="clean-cilium-state"
Apr 12 18:35:34.350802 kubelet[2429]: E0412 18:35:34.350503 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5" containerName="cilium-operator"
Apr 12 18:35:34.350802 kubelet[2429]: E0412 18:35:34.350510 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" containerName="mount-cgroup"
Apr 12 18:35:34.350802 kubelet[2429]: E0412 18:35:34.350517 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" containerName="mount-bpf-fs"
Apr 12 18:35:34.350802 kubelet[2429]: E0412 18:35:34.350524 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" containerName="cilium-agent"
Apr 12 18:35:34.350802 kubelet[2429]: I0412 18:35:34.350562 2429 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2722697-b602-41d9-8a60-d0b138d8039e" containerName="cilium-agent"
Apr 12 18:35:34.350802 kubelet[2429]: I0412 18:35:34.350573 2429 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01d15c0-24d1-4d66-8f36-2ff74cf8c3f5" containerName="cilium-operator"
Apr 12 18:35:34.356021 systemd[1]: Created slice kubepods-burstable-podbcc640c2_399e_44ed_a73c_f44daf87a12a.slice.
Apr 12 18:35:34.390146 kubelet[2429]: I0412 18:35:34.390109 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-bpf-maps\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390314 kubelet[2429]: I0412 18:35:34.390198 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-ipsec-secrets\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390314 kubelet[2429]: I0412 18:35:34.390247 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-cgroup\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390364 kubelet[2429]: I0412 18:35:34.390322 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-etc-cni-netd\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390364 kubelet[2429]: I0412 18:35:34.390343 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-config-path\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390413 kubelet[2429]: I0412 18:35:34.390366 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-lib-modules\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390441 kubelet[2429]: I0412 18:35:34.390437 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-xtables-lock\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390495 kubelet[2429]: I0412 18:35:34.390477 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-clustermesh-secrets\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390530 kubelet[2429]: I0412 18:35:34.390504 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-hostproc\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390571 kubelet[2429]: I0412 18:35:34.390554 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqsqz\" (UniqueName: \"kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-kube-api-access-gqsqz\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390608 kubelet[2429]: I0412 18:35:34.390582 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-run\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390644 kubelet[2429]: I0412 18:35:34.390630 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cni-path\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390677 kubelet[2429]: I0412 18:35:34.390656 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-net\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390677 kubelet[2429]: I0412 18:35:34.390674 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-hubble-tls\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.390738 kubelet[2429]: I0412 18:35:34.390724 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-kernel\") pod \"cilium-h9j67\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " pod="kube-system/cilium-h9j67"
Apr 12 18:35:34.400013 sshd[4122]: pam_unix(sshd:session): session closed for user core
Apr 12 18:35:34.402971 systemd[1]: sshd@21-10.200.20.12:22-10.200.12.6:53154.service: Deactivated successfully.
Apr 12 18:35:34.403713 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 18:35:34.404892 systemd-logind[1326]: Session 24 logged out. Waiting for processes to exit.
Apr 12 18:35:34.405942 systemd-logind[1326]: Removed session 24.
Apr 12 18:35:34.466521 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.12.6:53166.service.
Apr 12 18:35:34.659846 env[1340]: time="2024-04-12T18:35:34.659083094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9j67,Uid:bcc640c2-399e-44ed-a73c-f44daf87a12a,Namespace:kube-system,Attempt:0,}"
Apr 12 18:35:34.697203 env[1340]: time="2024-04-12T18:35:34.697127805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:35:34.697384 env[1340]: time="2024-04-12T18:35:34.697169444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:35:34.697468 env[1340]: time="2024-04-12T18:35:34.697367719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:35:34.697676 env[1340]: time="2024-04-12T18:35:34.697634833Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785 pid=4146 runtime=io.containerd.runc.v2
Apr 12 18:35:34.712465 systemd[1]: Started cri-containerd-ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785.scope.
Apr 12 18:35:34.738775 env[1340]: time="2024-04-12T18:35:34.738729113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9j67,Uid:bcc640c2-399e-44ed-a73c-f44daf87a12a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\""
Apr 12 18:35:34.743651 env[1340]: time="2024-04-12T18:35:34.743605759Z" level=info msg="CreateContainer within sandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:35:34.784753 env[1340]: time="2024-04-12T18:35:34.784695158Z" level=info msg="CreateContainer within sandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\""
Apr 12 18:35:34.785332 env[1340]: time="2024-04-12T18:35:34.785305144Z" level=info msg="StartContainer for \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\""
Apr 12 18:35:34.801351 systemd[1]: Started cri-containerd-5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074.scope.
Apr 12 18:35:34.811696 systemd[1]: cri-containerd-5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074.scope: Deactivated successfully.
Apr 12 18:35:34.811973 systemd[1]: Stopped cri-containerd-5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074.scope.
Apr 12 18:35:34.865671 sshd[4132]: Accepted publickey for core from 10.200.12.6 port 53166 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI
Apr 12 18:35:34.867862 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:35:34.872798 systemd[1]: Started session-25.scope.
Apr 12 18:35:34.873706 systemd-logind[1326]: New session 25 of user core.
Apr 12 18:35:34.886997 env[1340]: time="2024-04-12T18:35:34.886948008Z" level=info msg="shim disconnected" id=5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074 Apr 12 18:35:34.887310 env[1340]: time="2024-04-12T18:35:34.887290080Z" level=warning msg="cleaning up after shim disconnected" id=5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074 namespace=k8s.io Apr 12 18:35:34.887394 env[1340]: time="2024-04-12T18:35:34.887379838Z" level=info msg="cleaning up dead shim" Apr 12 18:35:34.894343 env[1340]: time="2024-04-12T18:35:34.894299076Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4207 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:35:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Apr 12 18:35:34.894867 env[1340]: time="2024-04-12T18:35:34.894722746Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Apr 12 18:35:34.895160 env[1340]: time="2024-04-12T18:35:34.895117937Z" level=error msg="Failed to pipe stdout of container \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\"" error="reading from a closed fifo" Apr 12 18:35:34.895923 env[1340]: time="2024-04-12T18:35:34.895888759Z" level=error msg="Failed to pipe stderr of container \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\"" error="reading from a closed fifo" Apr 12 18:35:34.901558 env[1340]: time="2024-04-12T18:35:34.901487508Z" level=error msg="StartContainer for \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Apr 12 18:35:34.902130 kubelet[2429]: E0412 18:35:34.901783 2429 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074" Apr 12 18:35:34.902130 kubelet[2429]: E0412 18:35:34.901892 2429 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Apr 12 18:35:34.902130 kubelet[2429]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Apr 12 18:35:34.902130 kubelet[2429]: rm /hostbin/cilium-mount Apr 12 18:35:34.902311 kubelet[2429]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gqsqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-h9j67_kube-system(bcc640c2-399e-44ed-a73c-f44daf87a12a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Apr 12 18:35:34.902388 kubelet[2429]: E0412 18:35:34.901933 2429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-h9j67" podUID="bcc640c2-399e-44ed-a73c-f44daf87a12a" Apr 12 18:35:35.228254 sshd[4132]: pam_unix(sshd:session): session closed for user core Apr 12 18:35:35.230887 systemd[1]: sshd@22-10.200.20.12:22-10.200.12.6:53166.service: Deactivated successfully. Apr 12 18:35:35.231629 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:35:35.232182 systemd-logind[1326]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:35:35.232894 systemd-logind[1326]: Removed session 25. Apr 12 18:35:35.296229 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.12.6:36154.service. Apr 12 18:35:35.324369 env[1340]: time="2024-04-12T18:35:35.324031202Z" level=info msg="StopPodSandbox for \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\"" Apr 12 18:35:35.324369 env[1340]: time="2024-04-12T18:35:35.324102960Z" level=info msg="Container to stop \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:35:35.335343 systemd[1]: cri-containerd-ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785.scope: Deactivated successfully. 
Apr 12 18:35:35.374622 env[1340]: time="2024-04-12T18:35:35.374573702Z" level=info msg="shim disconnected" id=ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785 Apr 12 18:35:35.375022 env[1340]: time="2024-04-12T18:35:35.374997052Z" level=warning msg="cleaning up after shim disconnected" id=ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785 namespace=k8s.io Apr 12 18:35:35.375139 env[1340]: time="2024-04-12T18:35:35.375124130Z" level=info msg="cleaning up dead shim" Apr 12 18:35:35.383442 env[1340]: time="2024-04-12T18:35:35.383397416Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4247 runtime=io.containerd.runc.v2\n" Apr 12 18:35:35.383928 env[1340]: time="2024-04-12T18:35:35.383900965Z" level=info msg="TearDown network for sandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" successfully" Apr 12 18:35:35.384025 env[1340]: time="2024-04-12T18:35:35.384007922Z" level=info msg="StopPodSandbox for \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" returns successfully" Apr 12 18:35:35.502821 kubelet[2429]: I0412 18:35:35.502712 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cni-path\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.503524 kubelet[2429]: I0412 18:35:35.503486 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-bpf-maps\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.503643 kubelet[2429]: I0412 18:35:35.503633 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-clustermesh-secrets\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.503819 kubelet[2429]: I0412 18:35:35.503807 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-net\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.503979 kubelet[2429]: I0412 18:35:35.503968 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-ipsec-secrets\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504099 kubelet[2429]: I0412 18:35:35.504077 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-etc-cni-netd\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504201 kubelet[2429]: I0412 18:35:35.504191 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-lib-modules\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: 
\"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504305 kubelet[2429]: I0412 18:35:35.504295 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-cgroup\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504401 kubelet[2429]: I0412 18:35:35.504390 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-config-path\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504497 kubelet[2429]: I0412 18:35:35.504486 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-run\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504589 kubelet[2429]: I0412 18:35:35.504578 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-xtables-lock\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504696 kubelet[2429]: I0412 18:35:35.502735 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cni-path" (OuterVolumeSpecName: "cni-path") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.504755 kubelet[2429]: I0412 18:35:35.504723 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.504755 kubelet[2429]: I0412 18:35:35.504746 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.504810 kubelet[2429]: I0412 18:35:35.504763 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.504862 kubelet[2429]: I0412 18:35:35.504850 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqsqz\" (UniqueName: \"kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-kube-api-access-gqsqz\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.504951 kubelet[2429]: I0412 18:35:35.504941 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-hostproc\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.505048 kubelet[2429]: I0412 18:35:35.505037 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-kernel\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.506683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785-rootfs.mount: Deactivated successfully. Apr 12 18:35:35.506797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785-shm.mount: Deactivated successfully. Apr 12 18:35:35.508660 systemd[1]: var-lib-kubelet-pods-bcc640c2\x2d399e\x2d44ed\x2da73c\x2df44daf87a12a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:35:35.510268 kubelet[2429]: I0412 18:35:35.510250 2429 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-hubble-tls\") pod \"bcc640c2-399e-44ed-a73c-f44daf87a12a\" (UID: \"bcc640c2-399e-44ed-a73c-f44daf87a12a\") " Apr 12 18:35:35.510411 kubelet[2429]: I0412 18:35:35.510399 2429 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-bpf-maps\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.510496 kubelet[2429]: I0412 18:35:35.510487 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-net\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.510579 kubelet[2429]: I0412 18:35:35.510570 2429 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-lib-modules\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.510655 kubelet[2429]: I0412 18:35:35.510646 2429 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cni-path\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.510728 kubelet[2429]: I0412 18:35:35.510255 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.510797 kubelet[2429]: I0412 18:35:35.510277 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.510854 kubelet[2429]: I0412 18:35:35.510293 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.511591 kubelet[2429]: I0412 18:35:35.511546 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-hostproc" (OuterVolumeSpecName: "hostproc") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.511724 kubelet[2429]: I0412 18:35:35.511709 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.512212 kubelet[2429]: I0412 18:35:35.512178 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:35:35.512302 kubelet[2429]: I0412 18:35:35.512225 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:35:35.513152 kubelet[2429]: I0412 18:35:35.513109 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:35:35.516753 kubelet[2429]: I0412 18:35:35.515356 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:35:35.516185 systemd[1]: var-lib-kubelet-pods-bcc640c2\x2d399e\x2d44ed\x2da73c\x2df44daf87a12a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:35:35.520209 kubelet[2429]: I0412 18:35:35.518322 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:35:35.518864 systemd[1]: var-lib-kubelet-pods-bcc640c2\x2d399e\x2d44ed\x2da73c\x2df44daf87a12a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:35:35.518966 systemd[1]: var-lib-kubelet-pods-bcc640c2\x2d399e\x2d44ed\x2da73c\x2df44daf87a12a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgqsqz.mount: Deactivated successfully. Apr 12 18:35:35.524115 kubelet[2429]: I0412 18:35:35.524039 2429 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-kube-api-access-gqsqz" (OuterVolumeSpecName: "kube-api-access-gqsqz") pod "bcc640c2-399e-44ed-a73c-f44daf87a12a" (UID: "bcc640c2-399e-44ed-a73c-f44daf87a12a"). InnerVolumeSpecName "kube-api-access-gqsqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:35:35.611822 kubelet[2429]: I0412 18:35:35.611763 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-cgroup\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.611822 kubelet[2429]: I0412 18:35:35.611824 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-config-path\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611839 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-run\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611850 2429 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-xtables-lock\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611861 2429 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gqsqz\" (UniqueName: \"kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-kube-api-access-gqsqz\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611871 2429 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-hostproc\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611884 2429 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-host-proc-sys-kernel\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 
18:35:35.612018 kubelet[2429]: I0412 18:35:35.611895 2429 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcc640c2-399e-44ed-a73c-f44daf87a12a-hubble-tls\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611905 2429 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-clustermesh-secrets\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612018 kubelet[2429]: I0412 18:35:35.611915 2429 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcc640c2-399e-44ed-a73c-f44daf87a12a-etc-cni-netd\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.612250 kubelet[2429]: I0412 18:35:35.611925 2429 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bcc640c2-399e-44ed-a73c-f44daf87a12a-cilium-ipsec-secrets\") on node \"ci-3510.3.3-a-e21a461a74\" DevicePath \"\"" Apr 12 18:35:35.698694 sshd[4229]: Accepted publickey for core from 10.200.12.6 port 36154 ssh2: RSA SHA256:FwI9mp8Uipvmjkr+VYh+76kYXjtYhCPwjtuEb1G3LpI Apr 12 18:35:35.700025 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:35:35.704342 systemd[1]: Started session-26.scope. Apr 12 18:35:35.704653 systemd-logind[1326]: New session 26 of user core. Apr 12 18:35:35.826849 systemd[1]: Removed slice kubepods-burstable-podbcc640c2_399e_44ed_a73c_f44daf87a12a.slice. Apr 12 18:35:36.326228 kubelet[2429]: I0412 18:35:36.326192 2429 scope.go:117] "RemoveContainer" containerID="5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074" Apr 12 18:35:36.329701 env[1340]: time="2024-04-12T18:35:36.329402585Z" level=info msg="RemoveContainer for \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\"" Apr 12 18:35:36.342201 env[1340]: time="2024-04-12T18:35:36.341989652Z" level=info msg="RemoveContainer for \"5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074\" returns successfully" Apr 12 18:35:36.362432 kubelet[2429]: I0412 18:35:36.362397 2429 topology_manager.go:215] "Topology Admit Handler" podUID="23b468b3-5f0e-41bd-959c-2ed8b638b33c" podNamespace="kube-system" podName="cilium-k4nd9" Apr 12 18:35:36.362652 kubelet[2429]: E0412 18:35:36.362638 2429 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bcc640c2-399e-44ed-a73c-f44daf87a12a" containerName="mount-cgroup" Apr 12 18:35:36.362758 kubelet[2429]: I0412 18:35:36.362745 2429 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc640c2-399e-44ed-a73c-f44daf87a12a" containerName="mount-cgroup" Apr 12 18:35:36.367773 systemd[1]: Created slice kubepods-burstable-pod23b468b3_5f0e_41bd_959c_2ed8b638b33c.slice. 
Apr 12 18:35:36.416440 kubelet[2429]: I0412 18:35:36.416399 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-host-proc-sys-net\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416605 kubelet[2429]: I0412 18:35:36.416450 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-lib-modules\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416605 kubelet[2429]: I0412 18:35:36.416473 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-hostproc\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416605 kubelet[2429]: I0412 18:35:36.416492 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-cilium-run\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416605 kubelet[2429]: I0412 18:35:36.416513 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-cni-path\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416605 kubelet[2429]: I0412 18:35:36.416547 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23b468b3-5f0e-41bd-959c-2ed8b638b33c-clustermesh-secrets\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416605 kubelet[2429]: I0412 18:35:36.416565 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/23b468b3-5f0e-41bd-959c-2ed8b638b33c-cilium-ipsec-secrets\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416752 kubelet[2429]: I0412 18:35:36.416583 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-host-proc-sys-kernel\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416752 kubelet[2429]: I0412 18:35:36.416602 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23b468b3-5f0e-41bd-959c-2ed8b638b33c-hubble-tls\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416752 kubelet[2429]: I0412 18:35:36.416620 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-bpf-maps\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416752 kubelet[2429]: I0412 18:35:36.416639 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-etc-cni-netd\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416752 kubelet[2429]: I0412 18:35:36.416658 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23b468b3-5f0e-41bd-959c-2ed8b638b33c-cilium-config-path\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416752 kubelet[2429]: I0412 18:35:36.416676 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-cilium-cgroup\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416895 kubelet[2429]: I0412 18:35:36.416695 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b8v8\" (UniqueName: \"kubernetes.io/projected/23b468b3-5f0e-41bd-959c-2ed8b638b33c-kube-api-access-4b8v8\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.416895 kubelet[2429]: I0412 18:35:36.416720 2429 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23b468b3-5f0e-41bd-959c-2ed8b638b33c-xtables-lock\") pod \"cilium-k4nd9\" (UID: \"23b468b3-5f0e-41bd-959c-2ed8b638b33c\") " pod="kube-system/cilium-k4nd9" Apr 12 18:35:36.671255 env[1340]: time="2024-04-12T18:35:36.670846146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4nd9,Uid:23b468b3-5f0e-41bd-959c-2ed8b638b33c,Namespace:kube-system,Attempt:0,}" Apr 12 18:35:36.700536 env[1340]: time="2024-04-12T18:35:36.700456816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:35:36.700719 env[1340]: time="2024-04-12T18:35:36.700518375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:35:36.700811 env[1340]: time="2024-04-12T18:35:36.700783329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:35:36.701939 env[1340]: time="2024-04-12T18:35:36.701056642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299 pid=4284 runtime=io.containerd.runc.v2 Apr 12 18:35:36.710285 systemd[1]: Started cri-containerd-818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299.scope. 
Apr 12 18:35:36.732365 env[1340]: time="2024-04-12T18:35:36.732324633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4nd9,Uid:23b468b3-5f0e-41bd-959c-2ed8b638b33c,Namespace:kube-system,Attempt:0,} returns sandbox id \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\"" Apr 12 18:35:36.736185 env[1340]: time="2024-04-12T18:35:36.736139264Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:35:36.778971 env[1340]: time="2024-04-12T18:35:36.778923787Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee\"" Apr 12 18:35:36.780267 env[1340]: time="2024-04-12T18:35:36.780239836Z" level=info msg="StartContainer for \"8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee\"" Apr 12 18:35:36.795467 systemd[1]: Started cri-containerd-8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee.scope. Apr 12 18:35:36.826546 env[1340]: time="2024-04-12T18:35:36.826489478Z" level=info msg="StartContainer for \"8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee\" returns successfully" Apr 12 18:35:36.831698 systemd[1]: cri-containerd-8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee.scope: Deactivated successfully. Apr 12 18:35:36.872333 env[1340]: time="2024-04-12T18:35:36.872286651Z" level=info msg="shim disconnected" id=8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee Apr 12 18:35:36.872591 env[1340]: time="2024-04-12T18:35:36.872572484Z" level=warning msg="cleaning up after shim disconnected" id=8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee namespace=k8s.io Apr 12 18:35:36.872670 env[1340]: time="2024-04-12T18:35:36.872656042Z" level=info msg="cleaning up dead shim" Apr 12 18:35:36.880003 env[1340]: time="2024-04-12T18:35:36.879958032Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4364 runtime=io.containerd.runc.v2\n" Apr 12 18:35:37.332216 env[1340]: time="2024-04-12T18:35:37.332173821Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:35:37.366912 env[1340]: time="2024-04-12T18:35:37.366856854Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1\"" Apr 12 18:35:37.367782 env[1340]: time="2024-04-12T18:35:37.367755393Z" level=info msg="StartContainer for \"cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1\"" Apr 12 18:35:37.387292 systemd[1]: Started cri-containerd-cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1.scope. Apr 12 18:35:37.423138 env[1340]: time="2024-04-12T18:35:37.423054346Z" level=info msg="StartContainer for \"cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1\" returns successfully" Apr 12 18:35:37.430912 systemd[1]: cri-containerd-cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1.scope: Deactivated successfully. 
Apr 12 18:35:37.461486 env[1340]: time="2024-04-12T18:35:37.461430292Z" level=info msg="shim disconnected" id=cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1 Apr 12 18:35:37.461486 env[1340]: time="2024-04-12T18:35:37.461481931Z" level=warning msg="cleaning up after shim disconnected" id=cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1 namespace=k8s.io Apr 12 18:35:37.461486 env[1340]: time="2024-04-12T18:35:37.461492971Z" level=info msg="cleaning up dead shim" Apr 12 18:35:37.469392 env[1340]: time="2024-04-12T18:35:37.469333428Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4424 runtime=io.containerd.runc.v2\n" Apr 12 18:35:37.824364 kubelet[2429]: I0412 18:35:37.824337 2429 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bcc640c2-399e-44ed-a73c-f44daf87a12a" path="/var/lib/kubelet/pods/bcc640c2-399e-44ed-a73c-f44daf87a12a/volumes" Apr 12 18:35:37.827645 env[1340]: time="2024-04-12T18:35:37.827603128Z" level=info msg="StopPodSandbox for \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\"" Apr 12 18:35:37.827767 env[1340]: time="2024-04-12T18:35:37.827699766Z" level=info msg="TearDown network for sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" successfully" Apr 12 18:35:37.827767 env[1340]: time="2024-04-12T18:35:37.827736285Z" level=info msg="StopPodSandbox for \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" returns successfully" Apr 12 18:35:37.828285 env[1340]: time="2024-04-12T18:35:37.828184075Z" level=info msg="RemovePodSandbox for \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\"" Apr 12 18:35:37.828360 env[1340]: time="2024-04-12T18:35:37.828292112Z" level=info msg="Forcibly stopping sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\"" Apr 12 18:35:37.828416 env[1340]: time="2024-04-12T18:35:37.828390270Z" level=info msg="TearDown network for sandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" successfully" Apr 12 18:35:37.838469 env[1340]: time="2024-04-12T18:35:37.838418397Z" level=info msg="RemovePodSandbox \"87896b8131e11030d5deaa7f57b5e8170ca5be52a7c1b6721f58ea4135047ef0\" returns successfully" Apr 12 18:35:37.839049 env[1340]: time="2024-04-12T18:35:37.839010383Z" level=info msg="StopPodSandbox for \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\"" Apr 12 18:35:37.839182 env[1340]: time="2024-04-12T18:35:37.839128140Z" level=info msg="TearDown network for sandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" successfully" Apr 12 18:35:37.839220 env[1340]: time="2024-04-12T18:35:37.839179779Z" level=info msg="StopPodSandbox for \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" returns successfully" Apr 12 18:35:37.840567 env[1340]: time="2024-04-12T18:35:37.839464772Z" level=info msg="RemovePodSandbox for \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\"" Apr 12 18:35:37.840567 env[1340]: time="2024-04-12T18:35:37.839493732Z" level=info msg="Forcibly stopping sandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\"" Apr 12 18:35:37.840567 env[1340]: time="2024-04-12T18:35:37.839556450Z" level=info msg="TearDown network for sandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" successfully" Apr 12 18:35:37.849284 env[1340]: time="2024-04-12T18:35:37.849152307Z" level=info 
msg="RemovePodSandbox \"9fb53db37a1fc80daa1dc670b0377edaeb9f6155e04726124106b7e82c2cd61d\" returns successfully" Apr 12 18:35:37.849786 env[1340]: time="2024-04-12T18:35:37.849752573Z" level=info msg="StopPodSandbox for \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\"" Apr 12 18:35:37.849914 env[1340]: time="2024-04-12T18:35:37.849863610Z" level=info msg="TearDown network for sandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" successfully" Apr 12 18:35:37.849956 env[1340]: time="2024-04-12T18:35:37.849912049Z" level=info msg="StopPodSandbox for \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" returns successfully" Apr 12 18:35:37.850333 env[1340]: time="2024-04-12T18:35:37.850301760Z" level=info msg="RemovePodSandbox for \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\"" Apr 12 18:35:37.850416 env[1340]: time="2024-04-12T18:35:37.850340879Z" level=info msg="Forcibly stopping sandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\"" Apr 12 18:35:37.850452 env[1340]: time="2024-04-12T18:35:37.850415957Z" level=info msg="TearDown network for sandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" successfully" Apr 12 18:35:37.858778 env[1340]: time="2024-04-12T18:35:37.858731884Z" level=info msg="RemovePodSandbox \"ef9a61296cf733816e4f01d6e5528c5a130271ef54b11a9a3d6fcc52e56e1785\" returns successfully" Apr 12 18:35:37.954995 kubelet[2429]: E0412 18:35:37.954962 2429 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:35:37.991650 kubelet[2429]: W0412 18:35:37.991607 2429 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcc640c2_399e_44ed_a73c_f44daf87a12a.slice/cri-containerd-5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074.scope WatchSource:0}: container "5185223de058b139e339f1f4d8bd08ac20aebb0db9a0ac9343b33f989119d074" in namespace "k8s.io": not found Apr 12 18:35:38.335270 env[1340]: time="2024-04-12T18:35:38.334785372Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:35:38.361934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341904315.mount: Deactivated successfully. Apr 12 18:35:38.378281 env[1340]: time="2024-04-12T18:35:38.378233122Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935\"" Apr 12 18:35:38.378967 env[1340]: time="2024-04-12T18:35:38.378932266Z" level=info msg="StartContainer for \"0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935\"" Apr 12 18:35:38.394351 systemd[1]: Started cri-containerd-0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935.scope. Apr 12 18:35:38.424922 systemd[1]: cri-containerd-0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935.scope: Deactivated successfully. 
Apr 12 18:35:38.432961 env[1340]: time="2024-04-12T18:35:38.431983193Z" level=info msg="StartContainer for \"0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935\" returns successfully" Apr 12 18:35:38.466832 env[1340]: time="2024-04-12T18:35:38.466778944Z" level=info msg="shim disconnected" id=0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935 Apr 12 18:35:38.466832 env[1340]: time="2024-04-12T18:35:38.466830862Z" level=warning msg="cleaning up after shim disconnected" id=0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935 namespace=k8s.io Apr 12 18:35:38.467098 env[1340]: time="2024-04-12T18:35:38.466843182Z" level=info msg="cleaning up dead shim" Apr 12 18:35:38.479764 env[1340]: time="2024-04-12T18:35:38.478138880Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4486 runtime=io.containerd.runc.v2\n" Apr 12 18:35:39.339755 env[1340]: time="2024-04-12T18:35:39.339704301Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:35:39.382916 env[1340]: time="2024-04-12T18:35:39.382866698Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230\"" Apr 12 18:35:39.383885 env[1340]: time="2024-04-12T18:35:39.383854556Z" level=info msg="StartContainer for \"ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230\"" Apr 12 18:35:39.404686 systemd[1]: Started cri-containerd-ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230.scope. Apr 12 18:35:39.431911 systemd[1]: cri-containerd-ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230.scope: Deactivated successfully. Apr 12 18:35:39.433054 env[1340]: time="2024-04-12T18:35:39.432949296Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b468b3_5f0e_41bd_959c_2ed8b638b33c.slice/cri-containerd-ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230.scope/memory.events\": no such file or directory" Apr 12 18:35:39.440572 env[1340]: time="2024-04-12T18:35:39.440500160Z" level=info msg="StartContainer for \"ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230\" returns successfully" Apr 12 18:35:39.470399 env[1340]: time="2024-04-12T18:35:39.470352187Z" level=info msg="shim disconnected" id=ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230 Apr 12 18:35:39.470661 env[1340]: time="2024-04-12T18:35:39.470641141Z" level=warning msg="cleaning up after shim disconnected" id=ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230 namespace=k8s.io Apr 12 18:35:39.470724 env[1340]: time="2024-04-12T18:35:39.470710659Z" level=info msg="cleaning up dead shim" Apr 12 18:35:39.482219 env[1340]: time="2024-04-12T18:35:39.482178673Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:35:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4545 runtime=io.containerd.runc.v2\n" Apr 12 18:35:39.529750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230-rootfs.mount: Deactivated successfully. 
Apr 12 18:35:40.344326 env[1340]: time="2024-04-12T18:35:40.344276387Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:35:40.373535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020392288.mount: Deactivated successfully. Apr 12 18:35:40.378220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100252542.mount: Deactivated successfully. Apr 12 18:35:40.393444 env[1340]: time="2024-04-12T18:35:40.393393489Z" level=info msg="CreateContainer within sandbox \"818d6cd36a05ab13f3b39cd48065fae34da1c5c74786eb4b37739bc9283f0299\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2\"" Apr 12 18:35:40.394193 env[1340]: time="2024-04-12T18:35:40.394159871Z" level=info msg="StartContainer for \"8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2\"" Apr 12 18:35:40.408691 systemd[1]: Started cri-containerd-8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2.scope. Apr 12 18:35:40.444378 env[1340]: time="2024-04-12T18:35:40.444321348Z" level=info msg="StartContainer for \"8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2\" returns successfully" Apr 12 18:35:40.835099 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Apr 12 18:35:41.104932 kubelet[2429]: W0412 18:35:41.104801 2429 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b468b3_5f0e_41bd_959c_2ed8b638b33c.slice/cri-containerd-8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee.scope WatchSource:0}: task 8db2e9a0dbe834b51172a132bf2a0933e76ac4eab9f099eb0aaf3ee420037fee not found: not found Apr 12 18:35:41.359948 kubelet[2429]: I0412 18:35:41.359823 2429 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k4nd9" podStartSLOduration=5.359783731 podStartE2EDuration="5.359783731s" podCreationTimestamp="2024-04-12 18:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:35:41.359136906 +0000 UTC m=+243.685567962" watchObservedRunningTime="2024-04-12 18:35:41.359783731 +0000 UTC m=+243.686214787" Apr 12 18:35:42.136510 systemd[1]: run-containerd-runc-k8s.io-8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2-runc.VboYL1.mount: Deactivated successfully. Apr 12 18:35:43.419417 systemd-networkd[1486]: lxc_health: Link UP Apr 12 18:35:43.432107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:35:43.434297 systemd-networkd[1486]: lxc_health: Gained carrier Apr 12 18:35:44.214915 kubelet[2429]: W0412 18:35:44.214862 2429 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b468b3_5f0e_41bd_959c_2ed8b638b33c.slice/cri-containerd-cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1.scope WatchSource:0}: task cc9212ccf1fe619dadd3689bfacc91763a3cb4b2d91e0658db1745b9fd99e2b1 not found: not found Apr 12 18:35:44.989200 systemd-networkd[1486]: lxc_health: Gained IPv6LL Apr 12 18:35:46.486879 systemd[1]: run-containerd-runc-k8s.io-8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2-runc.iP1S6d.mount: Deactivated successfully. 
Apr 12 18:35:47.325735 kubelet[2429]: W0412 18:35:47.325685 2429 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b468b3_5f0e_41bd_959c_2ed8b638b33c.slice/cri-containerd-0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935.scope WatchSource:0}: task 0c9e1e67b31442d997d8aaf033c7149ebb9dfaf416e399b021b60f37f4bc1935 not found: not found Apr 12 18:35:48.653398 systemd[1]: run-containerd-runc-k8s.io-8b2726907b7cca136c1745ed5cd3931d928ef379f7ddb16d7dca6fb3c38c72a2-runc.M7JuzX.mount: Deactivated successfully. Apr 12 18:35:48.706506 kubelet[2429]: E0412 18:35:48.706385 2429 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42854->127.0.0.1:45197: write tcp 127.0.0.1:42854->127.0.0.1:45197: write: broken pipe Apr 12 18:35:48.771893 sshd[4229]: pam_unix(sshd:session): session closed for user core Apr 12 18:35:48.775422 systemd[1]: sshd@23-10.200.20.12:22-10.200.12.6:36154.service: Deactivated successfully. Apr 12 18:35:48.776182 systemd[1]: session-26.scope: Deactivated successfully. Apr 12 18:35:48.777131 systemd-logind[1326]: Session 26 logged out. Waiting for processes to exit. Apr 12 18:35:48.777884 systemd-logind[1326]: Removed session 26. Apr 12 18:35:50.435078 kubelet[2429]: W0412 18:35:50.435026 2429 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23b468b3_5f0e_41bd_959c_2ed8b638b33c.slice/cri-containerd-ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230.scope WatchSource:0}: task ca8be3ac2a6eed1e239a45b01476a13d3ddc527efd26933a36419ba2fbdb1230 not found: not found Apr 12 18:37:07.157500 systemd[1]: cri-containerd-5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12.scope: Deactivated successfully. Apr 12 18:37:07.157815 systemd[1]: cri-containerd-5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12.scope: Consumed 5.290s CPU time. Apr 12 18:37:07.176515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12-rootfs.mount: Deactivated successfully. Apr 12 18:37:07.193708 env[1340]: time="2024-04-12T18:37:07.193654906Z" level=info msg="shim disconnected" id=5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12 Apr 12 18:37:07.193708 env[1340]: time="2024-04-12T18:37:07.193710825Z" level=warning msg="cleaning up after shim disconnected" id=5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12 namespace=k8s.io Apr 12 18:37:07.194147 env[1340]: time="2024-04-12T18:37:07.193721465Z" level=info msg="cleaning up dead shim" Apr 12 18:37:07.201190 env[1340]: time="2024-04-12T18:37:07.201144823Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:37:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5218 runtime=io.containerd.runc.v2\n" Apr 12 18:37:07.498436 kubelet[2429]: I0412 18:37:07.498406 2429 scope.go:117] "RemoveContainer" containerID="5f06b5eef7f39b1aedddd81de74797f3fe780062ec5a79be1c8ebc879df87f12" Apr 12 18:37:07.500628 env[1340]: time="2024-04-12T18:37:07.500593341Z" level=info msg="CreateContainer within sandbox \"dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 12 18:37:07.531606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607033197.mount: Deactivated successfully. 
Apr 12 18:37:07.536917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230614789.mount: Deactivated successfully. Apr 12 18:37:07.548973 env[1340]: time="2024-04-12T18:37:07.548919889Z" level=info msg="CreateContainer within sandbox \"dfe8f876f1299a7cd8b532c4aff5a299ad27868c6611db4b0743661c7a493a53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"491266883d1a3c09415c2988f44425fd0bb03c84f9b3e189a5d2e3266a662e5e\"" Apr 12 18:37:07.549659 env[1340]: time="2024-04-12T18:37:07.549632793Z" level=info msg="StartContainer for \"491266883d1a3c09415c2988f44425fd0bb03c84f9b3e189a5d2e3266a662e5e\"" Apr 12 18:37:07.564355 systemd[1]: Started cri-containerd-491266883d1a3c09415c2988f44425fd0bb03c84f9b3e189a5d2e3266a662e5e.scope. Apr 12 18:37:07.611117 env[1340]: time="2024-04-12T18:37:07.611046096Z" level=info msg="StartContainer for \"491266883d1a3c09415c2988f44425fd0bb03c84f9b3e189a5d2e3266a662e5e\" returns successfully" Apr 12 18:37:10.942551 kubelet[2429]: E0412 18:37:10.942522 2429 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.12:57814->10.200.20.30:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:37:11.357481 systemd[1]: cri-containerd-f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc.scope: Deactivated successfully. Apr 12 18:37:11.357808 systemd[1]: cri-containerd-f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc.scope: Consumed 2.333s CPU time. Apr 12 18:37:11.377506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc-rootfs.mount: Deactivated successfully. 
Apr 12 18:37:11.407810 env[1340]: time="2024-04-12T18:37:11.407763052Z" level=info msg="shim disconnected" id=f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc Apr 12 18:37:11.408329 env[1340]: time="2024-04-12T18:37:11.408305985Z" level=warning msg="cleaning up after shim disconnected" id=f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc namespace=k8s.io Apr 12 18:37:11.408428 env[1340]: time="2024-04-12T18:37:11.408411627Z" level=info msg="cleaning up dead shim" Apr 12 18:37:11.416416 env[1340]: time="2024-04-12T18:37:11.416372301Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:37:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5278 runtime=io.containerd.runc.v2\n" Apr 12 18:37:11.509012 kubelet[2429]: I0412 18:37:11.508978 2429 scope.go:117] "RemoveContainer" containerID="f8ea86a677dcfc926511bdd5f2ae48312a552595f24d7afe1fea43b99267f8fc" Apr 12 18:37:11.511132 env[1340]: time="2024-04-12T18:37:11.511092286Z" level=info msg="CreateContainer within sandbox \"571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 12 18:37:11.537246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138150655.mount: Deactivated successfully. Apr 12 18:37:11.542889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039446824.mount: Deactivated successfully. Apr 12 18:37:11.558628 env[1340]: time="2024-04-12T18:37:11.558577201Z" level=info msg="CreateContainer within sandbox \"571f6262f9bfd3e2a03a37e683d06c3cf2501c772d291cef2e825a171dda5d60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0ed9904f30e62459df1b9fe0ec7170b48896d9c2e0313f7ddaffecdbb2e8b6df\"" Apr 12 18:37:11.559293 env[1340]: time="2024-04-12T18:37:11.559265778Z" level=info msg="StartContainer for \"0ed9904f30e62459df1b9fe0ec7170b48896d9c2e0313f7ddaffecdbb2e8b6df\"" Apr 12 18:37:11.574882 systemd[1]: Started cri-containerd-0ed9904f30e62459df1b9fe0ec7170b48896d9c2e0313f7ddaffecdbb2e8b6df.scope. 
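
The two restarts above follow the kubelet's standard static-pod recovery path: containerd reports the shim disconnected, the kubelet logs RemoveContainer for the dead container, then issues CreateContainer/StartContainer against the same pod sandbox with the Attempt counter bumped to 1. The same state can be queried straight from containerd's CRI socket on the node; the sketch below is illustrative only (not kubelet code) and assumes the stock containerd socket path and the standard io.kubernetes.* labels the kubelet applies:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Talk to the same CRI endpoint the kubelet uses.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// List the kube-controller-manager containers the runtime still tracks;
    	// Metadata.Attempt is the counter seen in the ContainerMetadata above.
    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{
    			LabelSelector: map[string]string{
    				"io.kubernetes.pod.namespace":  "kube-system",
    				"io.kubernetes.container.name": "kube-controller-manager",
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s attempt=%d state=%s\n", c.Id[:13], c.Metadata.Attempt, c.State)
    	}
    }

Run on the node itself; exited attempts that containerd has already removed (like the RemoveContainer targets in this log) will no longer appear in the listing.
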
Apr 12 18:37:11.616219 env[1340]: time="2024-04-12T18:37:11.616078040Z" level=info msg="StartContainer for \"0ed9904f30e62459df1b9fe0ec7170b48896d9c2e0313f7ddaffecdbb2e8b6df\" returns successfully" Apr 12 18:37:11.840646 kubelet[2429]: E0412 18:37:11.840613 2429 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.12:58006->10.200.20.30:2379: read: connection timed out" Apr 12 18:37:15.216707 kubelet[2429]: I0412 18:37:15.216671 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" Apr 12 18:37:21.841158 kubelet[2429]: E0412 18:37:21.841119 2429 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": context deadline exceeded" Apr 12 18:37:31.841912 kubelet[2429]: E0412 18:37:31.841878 2429 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:37:41.843036 kubelet[2429]: E0412 18:37:41.842782 2429 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:37:44.945726 kubelet[2429]: E0412 18:37:44.945697 2429 event.go:346] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3dbd1a5abd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:02.263900861 +0000 UTC m=+324.590331917,LastTimestamp:2024-04-12 18:37:02.263900861 +0000 UTC m=+324.590331917,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:37:51.843185 kubelet[2429]: E0412 18:37:51.843156 2429 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:37:51.843595 kubelet[2429]: I0412 18:37:51.843573 2429 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 12 18:37:55.334877 kubelet[2429]: E0412 18:37:55.334838 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-04-12T18:37:45Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-04-12T18:37:45Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-04-12T18:37:45Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-04-12T18:37:45Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-3510.3.3-a-e21a461a74\": Patch \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:38:01.844079 kubelet[2429]: E0412 18:38:01.844028 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Apr 12 18:38:05.335652 kubelet[2429]: E0412 18:38:05.335620 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:38:12.045037 kubelet[2429]: E0412 18:38:12.044990 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Apr 12 18:38:12.264427 env[1340]: time="2024-04-12T18:38:12.264380354Z" level=info msg="StopContainer for \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\" with timeout 30 (s)" Apr 12 18:38:12.265097 env[1340]: time="2024-04-12T18:38:12.265048640Z" level=info msg="Stop container \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\" with signal terminated" Apr 12 18:38:15.218354 kubelet[2429]: I0412 18:38:15.218318 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-scheduler-ci-3510.3.3-a-e21a461a74)" Apr 12 18:38:15.336879 kubelet[2429]: E0412 18:38:15.336846 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:38:18.948447 kubelet[2429]: E0412 18:38:18.948419 2429 event.go:346] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:02.491397817 +0000 UTC m=+324.817828913,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:38:22.446696 kubelet[2429]: E0412 18:38:22.446619 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 12 18:38:25.338145 kubelet[2429]: E0412 18:38:25.338117 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:38:33.247976 kubelet[2429]: E0412 18:38:33.247939 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Apr 12 18:38:34.848757 kubelet[2429]: E0412 18:38:34.848727 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="3.2s" Apr 12 18:38:35.339533 kubelet[2429]: E0412 18:38:35.339438 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 12 18:38:35.339533 kubelet[2429]: E0412 18:38:35.339479 2429 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Apr 12 18:38:38.050264 kubelet[2429]: E0412 18:38:38.050231 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="6.4s" Apr 12 18:38:42.271798 env[1340]: time="2024-04-12T18:38:42.271742424Z" level=info msg="Kill container \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\"" Apr 12 18:38:42.284993 kubelet[2429]: E0412 18:38:42.284562 2429 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": http2: server sent GOAWAY and closed the connection; LastStreamID=635, ErrCode=NO_ERROR, debug=\"\"" 
event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:38:42.285831 systemd[1]: cri-containerd-22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606.scope: Deactivated successfully. Apr 12 18:38:42.286157 systemd[1]: cri-containerd-22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606.scope: Consumed 15.257s CPU time. Apr 12 18:38:42.308543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606-rootfs.mount: Deactivated successfully. Apr 12 18:38:42.330988 env[1340]: time="2024-04-12T18:38:42.330938440Z" level=info msg="shim disconnected" id=22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606 Apr 12 18:38:42.330988 env[1340]: time="2024-04-12T18:38:42.330985000Z" level=warning msg="cleaning up after shim disconnected" id=22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606 namespace=k8s.io Apr 12 18:38:42.330988 env[1340]: time="2024-04-12T18:38:42.330994040Z" level=info msg="cleaning up dead shim" Apr 12 18:38:42.337863 env[1340]: time="2024-04-12T18:38:42.337813825Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:38:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5364 runtime=io.containerd.runc.v2\n" Apr 12 18:38:42.342196 env[1340]: time="2024-04-12T18:38:42.342152481Z" level=info msg="StopContainer for \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\" returns successfully" Apr 12 18:38:42.344470 env[1340]: time="2024-04-12T18:38:42.344436569Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Apr 12 18:38:42.381707 env[1340]: time="2024-04-12T18:38:42.381627905Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea\"" Apr 12 18:38:42.382367 env[1340]: time="2024-04-12T18:38:42.382331828Z" level=info msg="StartContainer for \"6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea\"" Apr 12 18:38:42.400034 systemd[1]: Started cri-containerd-6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea.scope. 
Apr 12 18:38:42.437888 env[1340]: time="2024-04-12T18:38:42.437824791Z" level=info msg="StartContainer for \"6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea\" returns successfully" Apr 12 18:38:53.284765 kubelet[2429]: I0412 18:38:53.284733 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": net/http: TLS handshake timeout - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=635, ErrCode=NO_ERROR, debug=\"\"" Apr 12 18:38:53.985818 kubelet[2429]: E0412 18:38:53.985783 2429 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:38:54.451719 kubelet[2429]: E0412 18:38:54.451693 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": context deadline exceeded" interval="7s" Apr 12 18:38:55.738492 kubelet[2429]: E0412 18:38:55.738457 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 12 18:39:02.988409 systemd[1]: cri-containerd-6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea.scope: Deactivated successfully. Apr 12 18:39:03.007690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea-rootfs.mount: Deactivated successfully. 
Apr 12 18:39:03.018624 env[1340]: time="2024-04-12T18:39:03.018570294Z" level=info msg="shim disconnected" id=6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea Apr 12 18:39:03.018624 env[1340]: time="2024-04-12T18:39:03.018620134Z" level=warning msg="cleaning up after shim disconnected" id=6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea namespace=k8s.io Apr 12 18:39:03.018624 env[1340]: time="2024-04-12T18:39:03.018632094Z" level=info msg="cleaning up dead shim" Apr 12 18:39:03.025971 env[1340]: time="2024-04-12T18:39:03.025911458Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:39:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5429 runtime=io.containerd.runc.v2\n" Apr 12 18:39:03.710025 kubelet[2429]: I0412 18:39:03.709169 2429 scope.go:117] "RemoveContainer" containerID="22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606" Apr 12 18:39:03.710025 kubelet[2429]: I0412 18:39:03.709492 2429 scope.go:117] "RemoveContainer" containerID="6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea" Apr 12 18:39:03.711715 env[1340]: time="2024-04-12T18:39:03.711661499Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:2,}" Apr 12 18:39:03.712305 env[1340]: time="2024-04-12T18:39:03.712260859Z" level=info msg="RemoveContainer for \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\"" Apr 12 18:39:03.720686 env[1340]: time="2024-04-12T18:39:03.720640544Z" level=info msg="RemoveContainer for \"22001ba6c272f94784d75bb549891a6609d9a40fa1c9cf74b0a9d60d085e2606\" returns successfully" Apr 12 18:39:03.745512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294944234.mount: Deactivated successfully. Apr 12 18:39:03.774523 env[1340]: time="2024-04-12T18:39:03.774470699Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:2,} returns container id \"1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0\"" Apr 12 18:39:03.775032 env[1340]: time="2024-04-12T18:39:03.775004619Z" level=info msg="StartContainer for \"1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0\"" Apr 12 18:39:03.797085 systemd[1]: Started cri-containerd-1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0.scope. 
Apr 12 18:39:03.855494 env[1340]: time="2024-04-12T18:39:03.855430271Z" level=info msg="StartContainer for \"1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0\" returns successfully" Apr 12 18:39:05.739078 kubelet[2429]: E0412 18:39:05.739035 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 10.200.20.12:51220->10.200.20.12:6443: read: connection reset by peer" Apr 12 18:39:11.452845 kubelet[2429]: E0412 18:39:11.452803 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 10.200.20.12:38376->10.200.20.12:6443: read: connection reset by peer" interval="7s" Apr 12 18:39:13.991294 kubelet[2429]: I0412 18:39:13.991254 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": net/http: TLS handshake timeout - error from a previous attempt: read tcp 10.200.20.12:50970->10.200.20.12:6443: read: connection reset by peer" Apr 12 18:39:13.992007 kubelet[2429]: E0412 18:39:13.991973 2429 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:39:15.739451 kubelet[2429]: E0412 18:39:15.739415 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 12 18:39:23.993423 kubelet[2429]: I0412 18:39:23.993303 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": net/http: TLS handshake timeout" Apr 12 18:39:25.289660 kubelet[2429]: E0412 18:39:25.289619 2429 event.go:355] "Unable to write event (may retry after sleeping)" 
err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": read tcp 10.200.20.12:46396->10.200.20.12:6443: read: connection reset by peer" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:39:25.290595 systemd[1]: cri-containerd-1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0.scope: Deactivated successfully. Apr 12 18:39:25.290937 systemd[1]: cri-containerd-1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0.scope: Consumed 1.484s CPU time. Apr 12 18:39:25.310748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0-rootfs.mount: Deactivated successfully. Apr 12 18:39:25.346766 env[1340]: time="2024-04-12T18:39:25.346714874Z" level=info msg="shim disconnected" id=1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0 Apr 12 18:39:25.346766 env[1340]: time="2024-04-12T18:39:25.346763194Z" level=warning msg="cleaning up after shim disconnected" id=1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0 namespace=k8s.io Apr 12 18:39:25.346766 env[1340]: time="2024-04-12T18:39:25.346772314Z" level=info msg="cleaning up dead shim" Apr 12 18:39:25.354007 env[1340]: time="2024-04-12T18:39:25.353952700Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:39:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5491 runtime=io.containerd.runc.v2\n" Apr 12 18:39:25.751933 kubelet[2429]: I0412 18:39:25.751839 2429 scope.go:117] "RemoveContainer" containerID="6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea" Apr 12 18:39:25.752243 kubelet[2429]: I0412 18:39:25.752214 2429 scope.go:117] "RemoveContainer" containerID="1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0" Apr 12 18:39:25.753233 env[1340]: time="2024-04-12T18:39:25.753200611Z" level=info msg="RemoveContainer for \"6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea\"" Apr 12 18:39:25.753340 kubelet[2429]: E0412 18:39:25.753279 2429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ci-3510.3.3-a-e21a461a74_kube-system(3e4868b618f0db5237f1e85b851ead58)\"" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" podUID="3e4868b618f0db5237f1e85b851ead58" Apr 12 18:39:25.763373 env[1340]: time="2024-04-12T18:39:25.763330150Z" level=info msg="RemoveContainer for \"6654a84016d50f2f4968c14aca201c2d3a3350bdfaf6c12a54d8204e3a709aea\" returns successfully" Apr 12 18:39:26.289561 kubelet[2429]: E0412 18:39:26.289331 2429 kubelet_node_status.go:544] "Error 
updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 10.200.20.12:59428->10.200.20.12:6443: read: connection reset by peer" Apr 12 18:39:26.289561 kubelet[2429]: E0412 18:39:26.289513 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused - error from a previous attempt: read tcp 10.200.20.12:46366->10.200.20.12:6443: read: connection reset by peer" interval="7s" Apr 12 18:39:26.289561 kubelet[2429]: E0412 18:39:26.289526 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.289561 kubelet[2429]: E0412 18:39:26.289537 2429 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Apr 12 18:39:26.291549 kubelet[2429]: I0412 18:39:26.291524 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused - error from a previous attempt: read tcp 10.200.20.12:46406->10.200.20.12:6443: read: connection reset by peer" Apr 12 18:39:26.291773 kubelet[2429]: I0412 18:39:26.291755 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.292234 kubelet[2429]: I0412 18:39:26.292219 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.292521 kubelet[2429]: I0412 18:39:26.292506 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.292770 kubelet[2429]: I0412 18:39:26.292755 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.293052 kubelet[2429]: I0412 18:39:26.293038 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" 
pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.293409 kubelet[2429]: I0412 18:39:26.293393 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:26.293688 kubelet[2429]: I0412 18:39:26.293673 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:27.821684 kubelet[2429]: I0412 18:39:27.821640 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:27.822043 kubelet[2429]: I0412 18:39:27.821815 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:27.822043 kubelet[2429]: I0412 18:39:27.821957 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:30.256454 kubelet[2429]: I0412 18:39:30.256390 2429 scope.go:117] "RemoveContainer" containerID="1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0" Apr 12 18:39:30.257012 kubelet[2429]: I0412 18:39:30.256743 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:30.257305 kubelet[2429]: I0412 18:39:30.257278 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:30.257537 kubelet[2429]: I0412 18:39:30.257514 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 
10.200.20.12:6443: connect: connection refused" Apr 12 18:39:30.257969 kubelet[2429]: E0412 18:39:30.257921 2429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ci-3510.3.3-a-e21a461a74_kube-system(3e4868b618f0db5237f1e85b851ead58)\"" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" podUID="3e4868b618f0db5237f1e85b851ead58" Apr 12 18:39:33.290412 kubelet[2429]: E0412 18:39:33.290380 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="7s" Apr 12 18:39:33.772629 kubelet[2429]: I0412 18:39:33.772601 2429 scope.go:117] "RemoveContainer" containerID="1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0" Apr 12 18:39:33.773306 kubelet[2429]: E0412 18:39:33.773292 2429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ci-3510.3.3-a-e21a461a74_kube-system(3e4868b618f0db5237f1e85b851ead58)\"" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" podUID="3e4868b618f0db5237f1e85b851ead58" Apr 12 18:39:35.291625 kubelet[2429]: E0412 18:39:35.291589 2429 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:39:36.363788 kubelet[2429]: E0412 18:39:36.363754 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?resourceVersion=0&timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:36.364599 kubelet[2429]: E0412 18:39:36.364578 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:36.364897 kubelet[2429]: E0412 18:39:36.364879 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get 
\"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:36.365197 kubelet[2429]: E0412 18:39:36.365179 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:36.365491 kubelet[2429]: E0412 18:39:36.365474 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:36.365582 kubelet[2429]: E0412 18:39:36.365569 2429 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Apr 12 18:39:37.822425 kubelet[2429]: I0412 18:39:37.822391 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:37.823007 kubelet[2429]: I0412 18:39:37.822987 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:37.823330 kubelet[2429]: I0412 18:39:37.823300 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:39:40.291460 kubelet[2429]: E0412 18:39:40.291424 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="7s" Apr 12 18:39:45.293199 kubelet[2429]: E0412 18:39:45.293164 2429 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:39:45.822435 kubelet[2429]: I0412 18:39:45.822404 2429 scope.go:117] "RemoveContainer" containerID="1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0" Apr 12 18:39:45.825454 env[1340]: time="2024-04-12T18:39:45.825400428Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:3,}" Apr 12 18:39:45.850492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104660311.mount: Deactivated successfully. Apr 12 18:39:45.856599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441119771.mount: Deactivated successfully. Apr 12 18:39:45.867600 env[1340]: time="2024-04-12T18:39:45.867548616Z" level=info msg="CreateContainer within sandbox \"9d993091ee42cd187992439fef462b94488c2d6b4baff67d6b74bf1abf0073b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:3,} returns container id \"a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9\"" Apr 12 18:39:45.868141 env[1340]: time="2024-04-12T18:39:45.868115294Z" level=info msg="StartContainer for \"a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9\"" Apr 12 18:39:45.888058 systemd[1]: Started cri-containerd-a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9.scope. Apr 12 18:39:45.932661 env[1340]: time="2024-04-12T18:39:45.932602111Z" level=info msg="StartContainer for \"a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9\" returns successfully" Apr 12 18:39:56.259623 update_engine[1329]: I0412 18:39:56.259563 1329 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 12 18:39:56.259623 update_engine[1329]: I0412 18:39:56.259616 1329 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 12 18:39:56.260146 update_engine[1329]: I0412 18:39:56.259792 1329 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 12 18:39:56.260176 update_engine[1329]: I0412 18:39:56.260166 1329 omaha_request_params.cc:62] Current group set to lts Apr 12 18:39:56.260567 update_engine[1329]: I0412 18:39:56.260258 1329 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 12 18:39:56.260567 update_engine[1329]: I0412 18:39:56.260268 1329 update_attempter.cc:643] Scheduling an action processor start. 
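
The update_engine entries that follow show why no OS update traffic ever leaves this node: the Omaha request is posted to the literal host "disabled", which then fails DNS resolution by design ("Could not resolve host: disabled"). Together with "Current group set to lts" above, that is consistent with an update.conf along these lines (illustrative; the file itself is not shown in this log):

    # /etc/flatcar/update.conf
    GROUP=lts
    SERVER=disabled
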
Apr 12 18:39:56.260567 update_engine[1329]: I0412 18:39:56.260284 1329 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 12 18:39:56.260567 update_engine[1329]: I0412 18:39:56.260308 1329 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 12 18:39:56.260697 locksmithd[1416]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 12 18:39:56.420368 update_engine[1329]: I0412 18:39:56.420322 1329 omaha_request_action.cc:270] Posting an Omaha request to disabled Apr 12 18:39:56.420368 update_engine[1329]: I0412 18:39:56.420353 1329 omaha_request_action.cc:271] Request: [Omaha request XML body not captured] Apr 12 18:39:56.420368 update_engine[1329]: I0412 18:39:56.420361 1329 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 12 18:39:56.421412 update_engine[1329]: I0412 18:39:56.421388 1329 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 12 18:39:56.422375 update_engine[1329]: I0412 18:39:56.421604 1329 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 12 18:39:56.463785 update_engine[1329]: E0412 18:39:56.463758 1329 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 12 18:39:56.463875 update_engine[1329]: I0412 18:39:56.463857 1329 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 12 18:39:56.553705 kubelet[2429]: E0412 18:39:56.552884 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 12 18:39:56.800386 kubelet[2429]: I0412 18:39:56.800349 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": net/http: TLS handshake timeout" Apr 12 18:39:57.292443 kubelet[2429]: E0412 18:39:57.292386 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 12 18:40:05.295441 kubelet[2429]: E0412 18:40:05.295408 2429 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/events/kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.3-a-e21a461a74.17c59c3d52f2318a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.3-a-e21a461a74,UID:3e4868b618f0db5237f1e85b851ead58,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.3-a-e21a461a74,},FirstTimestamp:2024-04-12 18:37:00.482883978 +0000 UTC m=+322.809315034,LastTimestamp:2024-04-12 18:37:06.500124935 +0000 UTC m=+328.826555951,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.3-a-e21a461a74,}" Apr 12 18:40:06.264746 update_engine[1329]: I0412 18:40:06.264703 1329 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 12 18:40:06.265051 update_engine[1329]: I0412 18:40:06.264883 1329 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 12 18:40:06.265162 update_engine[1329]: I0412 18:40:06.265142 1329 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 12 18:40:06.277709 update_engine[1329]: E0412 18:40:06.277669 1329 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 12 18:40:06.277827 update_engine[1329]: I0412 18:40:06.277808 1329 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 12 18:40:06.554929 kubelet[2429]: E0412 18:40:06.554519 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 12 18:40:06.801988 systemd[1]: cri-containerd-a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9.scope: Deactivated successfully. Apr 12 18:40:06.819869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9-rootfs.mount: Deactivated successfully. 
Apr 12 18:40:06.848231 env[1340]: time="2024-04-12T18:40:06.848175721Z" level=info msg="shim disconnected" id=a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9 Apr 12 18:40:06.848231 env[1340]: time="2024-04-12T18:40:06.848226440Z" level=warning msg="cleaning up after shim disconnected" id=a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9 namespace=k8s.io Apr 12 18:40:06.848231 env[1340]: time="2024-04-12T18:40:06.848239080Z" level=info msg="cleaning up dead shim" Apr 12 18:40:06.855166 env[1340]: time="2024-04-12T18:40:06.855109160Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:40:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5563 runtime=io.containerd.runc.v2\n" Apr 12 18:40:07.799706 kubelet[2429]: I0412 18:40:07.799639 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused - error from a previous attempt: read tcp 10.200.20.12:37828->10.200.20.12:6443: read: connection reset by peer" Apr 12 18:40:07.800091 kubelet[2429]: I0412 18:40:07.799987 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.800462 kubelet[2429]: I0412 18:40:07.800376 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.800621 kubelet[2429]: I0412 18:40:07.800582 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.800999 kubelet[2429]: I0412 18:40:07.800960 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.801126 kubelet[2429]: E0412 18:40:07.801047 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused - error from a previous attempt: read tcp 10.200.20.12:38264->10.200.20.12:6443: read: connection reset by peer" interval="7s" Apr 12 18:40:07.804677 kubelet[2429]: E0412 18:40:07.804654 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 
10.200.20.12:6443: connect: connection refused - error from a previous attempt: read tcp 10.200.20.12:38326->10.200.20.12:6443: read: connection reset by peer" Apr 12 18:40:07.804977 kubelet[2429]: E0412 18:40:07.804960 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.805287 kubelet[2429]: E0412 18:40:07.805270 2429 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ci-3510.3.3-a-e21a461a74\": Get \"https://10.200.20.12:6443/api/v1/nodes/ci-3510.3.3-a-e21a461a74?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.805386 kubelet[2429]: E0412 18:40:07.805374 2429 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Apr 12 18:40:07.822196 kubelet[2429]: I0412 18:40:07.822158 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.822366 kubelet[2429]: I0412 18:40:07.822342 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.822520 kubelet[2429]: I0412 18:40:07.822498 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.837892 kubelet[2429]: I0412 18:40:07.837867 2429 scope.go:117] "RemoveContainer" containerID="1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0" Apr 12 18:40:07.838709 kubelet[2429]: I0412 18:40:07.838635 2429 status_manager.go:853] "Failed to get status for pod" podUID="d4db9e5adda04a9e3cb4cc0abd85127c" pod="kube-system/kube-scheduler-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.838945 kubelet[2429]: I0412 18:40:07.838928 2429 scope.go:117] "RemoveContainer" containerID="a408718866a4338b161069b1e565db5698fc144a453ab373552bc2aa1c796df9" Apr 12 18:40:07.839546 kubelet[2429]: I0412 18:40:07.838949 2429 status_manager.go:853] "Failed to get status for pod" podUID="3e4868b618f0db5237f1e85b851ead58" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.839776 kubelet[2429]: I0412 18:40:07.839745 2429 status_manager.go:853] "Failed to get status for pod" podUID="ead94c2c69c36521e0c1f95c4806670d" pod="kube-system/kube-controller-manager-ci-3510.3.3-a-e21a461a74" err="Get 
\"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.3-a-e21a461a74\": dial tcp 10.200.20.12:6443: connect: connection refused" Apr 12 18:40:07.840121 kubelet[2429]: E0412 18:40:07.840102 2429 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-3510.3.3-a-e21a461a74_kube-system(3e4868b618f0db5237f1e85b851ead58)\"" pod="kube-system/kube-apiserver-ci-3510.3.3-a-e21a461a74" podUID="3e4868b618f0db5237f1e85b851ead58" Apr 12 18:40:07.840634 env[1340]: time="2024-04-12T18:40:07.840600696Z" level=info msg="RemoveContainer for \"1f2bd1e3d2a34dfbe19ac48027dc81ab97d9e1d4b5b84d1777f6d846fae7abe0\"" Apr 12 18:40:07.868304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.868625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.868743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.876605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.892914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.907676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.907883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.914863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.921946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.929230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.951334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.951598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.958455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.965491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.972628 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.994320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:07.994554 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:08.001843 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:08.008912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:08.016052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Apr 12 18:40:08.035130 kernel: 